title | id | abstract | categories | doi | created | updated | authors | url | abstract_length | id_n
---|---|---|---|---|---|---|---|---|---|---
treewidth reduction for constrained separation and bipartization problems | 0902.3780 | we present a method for reducing the treewidth of a graph while preserving all the minimal $s-t$ separators. this technique turns out to be very useful for establishing the fixed-parameter tractability of constrained separation and bipartization problems. to demonstrate the power of this technique, we prove the fixed-parameter tractability of a number of well-known separation and bipartization problems with various additional restrictions (e.g., the vertices being removed from the graph form an independent set). these results answer a number of open questions in the area of parameterized complexity. | cs.ds cs.dm | nan | 2009-02-22T00:00:00 | 2010-02-03T00:00:00 | ['marx', "o'sullivan", 'razgon'] | https://arxiv.org/abs/0902.3780 | 599 | 300
on graph theoretic results underlying the analysis of consensus in multi-agent systems | 0902.4218 | this note corrects a pretty serious mistake and some inaccuracies in "consensus and cooperation in networked multi-agent systems" by r. olfati-saber, j.a. fax, and r.m. murray, published in vol. 95 of the proceedings of the ieee (2007, no. 1, p. 215-233). it also mentions several stronger results applicable to the class of problems under consideration and addresses the issue of priority whose interpretation in the above-mentioned paper is not exact. | cs.ma cs.dm math.co math.oc | 10.1109/jproc.2010.2049911 | 2009-02-24T00:00:00 | null | ['chebotarev'] | https://arxiv.org/abs/0902.4218 | 447 | 301
using distributed rate-splitting game to approach rate region boundary of the gaussian interference channel | 0902.4577 | determining how to approach the rate boundary of the gaussian interference channel in practical systems is a big concern. in this paper, a distributed rate-splitting (drs) scheme is proposed to approach the rate region boundary of the gaussian interference channel. it is shown that the drs scheme can be formulated as a non-cooperative game. we introduce the stackelberg equilibrium (se) with multiple leaders as the equilibrium point of the non-cooperative game. therefore, an iterative multiple waterlevels water-filling algorithm (iml-wfa) is developed to efficiently reach the se of the non-cooperative game. the existence of se is established for the game. numerical examples show that the rate-tuples achieved by the drs are very close to the boundary of the well-known hk region. | cs.it math.it | nan | 2009-02-26T00:00:00 | 2010-03-22T00:00:00 | ['jing', 'bai', 'ma'] | https://arxiv.org/abs/0902.4577 | 776 | 302
faith in the algorithm, part 1: beyond the turing test | 0903.0200 | since the turing test was first proposed by alan turing in 1950, the primary goal of artificial intelligence has been predicated on the ability for computers to imitate human behavior. however, the majority of uses for the computer can be said to fall outside the domain of human abilities and it is exactly outside of this domain where computers have demonstrated their greatest contribution to intelligence. another goal for artificial intelligence is one that is not predicated on human mimicry, but instead, on human amplification. this article surveys various systems that contribute to the advancement of human and social intelligence. | cs.cy cs.ai | nan | 2009-03-01T00:00:00 | null | ['rodriguez', 'pepe'] | https://arxiv.org/abs/0903.0200 | 633 | 303
circuit design for a measurement-based quantum carry-lookahead adder | 0903.0748 | we present the design and evaluation of a quantum carry-lookahead adder (qcla) using measurement-based quantum computation (mbqc), called mbqcla. qcla was originally designed for an abstract, concurrent architecture supporting long-distance communication, but most realistic architectures heavily constrain communication distances. the quantum carry-lookahead adder is faster than a quantum ripple-carry adder; qcla has logarithmic depth while ripple adders have linear depth. mbqcla utilizes mbqc's ability to transfer quantum states in unit time to accelerate addition. mbqcla breaks the latency limit of addition circuits in nearest neighbor-only architectures: compared to the $\theta(n)$ limit on circuit depth for linear nearest-neighbor architectures, it can reach $\theta(\log n)$ depth. mbqcla is an order of magnitude faster than a ripple-carry adder when adding registers longer than 100 qubits, but requires a cluster state that is an order of magnitude larger. the cluster state resources can be classified as computation and communication; for the unoptimized form, $\approx$ 88 % of the resources are used for communication. hand optimization of horizontal communication costs results in a $\approx$ 12% reduction in spatial resources for the in-place mbqcla circuit. for comparison, a graph state quantum carry-lookahead adder (gsqcla) uses only $\approx$ 9 % of the spatial resources of the mbqcla. | quant-ph cs.ar | 10.1142/s0219749910006496 | 2009-03-04T00:00:00 | 2009-10-14T00:00:00 | ['trisetyarso', 'van meter'] | https://arxiv.org/abs/0903.0748 | 1,397 | 304
quantum information science and nanotechnology | 0903.1204 | in this note is touched upon an application of quantum information science (qis) in the nanotechnology area. the laws of quantum mechanics may be very important for nano-scale objects. a problem with simulating of quantum systems is well known and quantum computer was initially suggested by r. feynman just as the way to overcome such difficulties. mathematical methods developed in qis also may be applied for description of nano-devices. few illustrative examples are mentioned and they may be related with so-called fourth generation of nanotechnology products. | quant-ph cs.oh | nan | 2009-03-06T00:00:00 | null | ['vlasov'] | https://arxiv.org/abs/0903.1204 | 554 | 305
on the growth rate of the weight distribution of irregular doubly-generalized ldpc codes | 0903.1588 | in this paper, an expression for the asymptotic growth rate of the number of small linear-weight codewords of irregular doubly-generalized ldpc (d-gldpc) codes is derived. the expression is compact and generalizes existing results for ldpc and generalized ldpc (gldpc) codes. ensembles with check or variable node minimum distance greater than 2 are shown to have good growth rate behavior, while for other ensembles a fundamental parameter is identified which discriminates between an asymptotically small and an asymptotically large expected number of small linear-weight codewords. also, in the latter case it is shown that the growth rate depends only on the check and variable nodes with minimum distance 2. an important connection between this new result and the stability condition of d-gldpc codes over the bec is highlighted. such a connection, previously observed for ldpc and gldpc codes, is now extended to the case of d-gldpc codes. finally, it is shown that the analysis may be extended to include the growth rate of the stopping set size distribution of irregular d-gldpc codes. | cs.it math.it | nan | 2009-03-09T00:00:00 | 2010-05-04T00:00:00 | ['flanagan', 'paolini', 'chiani', 'fossorier'] | https://arxiv.org/abs/0903.1588 | 1,082 | 306
susceptibility propagation for constraint satisfaction problems | 0903.1621 | we study the susceptibility propagation, a message-passing algorithm to compute correlation functions. it is applied to constraint satisfaction problems and its accuracy is examined. as a heuristic method to find a satisfying assignment, we propose susceptibility-guided decimation where correlations among the variables play an important role. we apply this novel decimation to locked occupation problems, a class of hard constraint satisfaction problems exhibited recently. it is shown that the present method performs better than the standard belief-guided decimation. | cond-mat.dis-nn cond-mat.stat-mech cs.it math.it | 10.1088/1742-6596/233/1/012003 | 2009-03-09T00:00:00 | null | ['higuchi', 'mézard'] | https://arxiv.org/abs/0903.1621 | 564 | 307
faceted exploration of emerging resource spaces | 0903.1680 | humans have the ability to recognize the real world from different facets. faceted exploration is a mechanism for browsing and understanding large-scale resources in information network by multiple facets. this paper proposes an emerging resource space model, whose schema is a partially ordered set of concepts with subclassof relation and each resource is categorized by multiple concepts. emerging resource space (ers) is a class of resources characterized by a concept set. erses compose a lattice (ersl) via concept association. a series of exploration operations is proposed to guide users to explore through ersl with more demanding and richer semantics than current faceted navigation. to fulfill instant response during faceted exploration, we devise an efficient algorithm for mining and indexing ersl. the proposed model can effectively support faceted exploration in various applications from personal information management to large-scale information sharing. | cs.db cs.dl cs.hc | nan | 2009-03-10T00:00:00 | 2010-10-05T00:00:00 | ['zhuge', 'he'] | https://arxiv.org/abs/0903.1680 | 960 | 308
phase transitions and random quantum satisfiability | 0903.1904 | alongside the effort underway to build quantum computers, it is important to better understand which classes of problems they will find easy and which others even they will find intractable. we study random ensembles of the qma$_1$-complete quantum satisfiability (qsat) problem introduced by bravyi. qsat appropriately generalizes the np-complete classical satisfiability (sat) problem. we show that, as the density of clauses/projectors is varied, the ensembles exhibit quantum phase transitions between phases that are satisfiable and unsatisfiable. remarkably, almost all instances of qsat for any hypergraph exhibit the same dimension of the satisfying manifold. this establishes the qsat decision problem as equivalent to a, potentially new, graph theoretic problem and that the hardest typical instances are likely to be localized in a bounded range of clause density. | quant-ph cond-mat.dis-nn cond-mat.stat-mech cs.cc | nan | 2009-03-11T00:00:00 | null | ['laumann', 'moessner', 'scardicchio', 'sondhi'] | https://arxiv.org/abs/0903.1904 | 864 | 309
combinatorial deformations of algebras: twisting and perturbations | 0903.2101 | the framework used to prove the multiplicative law deformation of the algebra of feynman-bender diagrams is a \textit{twisted shifted dual law} (in fact, twice). we give here a clear interpretation of its two parameters. the crossing parameter is a deformation of the tensor structure whereas the superposition parameter is a perturbation of the shuffle coproduct of hoffman type which, in turn, can be interpreted as the diagonal restriction of a superproduct. here, we systematically detail these constructions. | cs.sc math-ph math.co math.mp | nan | 2009-03-12T00:00:00 | 2010-08-27T00:00:00 | ['duchamp', 'tollu', 'penson', 'koshevoy'] | https://arxiv.org/abs/0903.2101 | 508 | 310
game theory and the frequency selective interference channel - a tutorial | 0903.2174 | this paper provides a tutorial overview of game theoretic techniques used for communication over frequency selective interference channels. we discuss both competitive and cooperative techniques. keywords: game theory, competitive games, cooperative games, nash equilibrium, nash bargaining solution, generalized nash games, spectrum optimization, distributed coordination, interference channel, multiple access channel, iterative water-filling. | cs.it cs.gt math.it | 10.1109/msp.2009.933372 | 2009-03-12T00:00:00 | null | ['leshem', 'zehavi'] | https://arxiv.org/abs/0903.2174 | 441 | 311
on the (semi)lattices induced by continuous reducibilities | 0903.2177 | continuous reducibilities are a proven tool in computable analysis, and have applications in other fields such as constructive mathematics or reverse mathematics. we study the order-theoretic properties of several variants of the two most important definitions, and especially introduce suprema for them. the suprema are shown to commutate with several characteristic numbers. | cs.lo | 10.1002/malq.200910104 | 2009-03-12T00:00:00 | 2010-10-21T00:00:00 | ['pauly'] | https://arxiv.org/abs/0903.2177 | 372 | 312
performance assessment of mimo-bicm demodulators based on system capacity | 0903.2711 | we provide a comprehensive performance comparison of soft-output and hard-output demodulators in the context of non-iterative multiple-input multiple-output bit-interleaved coded modulation (mimo-bicm). coded bit error rate (ber), widely used in literature for demodulator comparison, has the drawback of depending strongly on the error correcting code being used. this motivates us to propose a code-independent performance measure in terms of system capacity, i.e., mutual information of the equivalent modulation channel that comprises modulator, wireless channel, and demodulator. we present extensive numerical results for ergodic and quasi-static fading channels under perfect and imperfect channel state information. these results reveal that the performance ranking of mimo demodulators is rate-dependent. furthermore, they provide new insights regarding mimo-bicm system design, i.e., the choice of antenna configuration, symbol constellation, and demodulator for a given target rate. | cs.it math.it | nan | 2009-03-16T00:00:00 | 2010-12-04T00:00:00 | ['fertl', 'jalden', 'matz'] | https://arxiv.org/abs/0903.2711 | 980 | 313
the perfect binary one-error-correcting codes of length 15: part ii--properties | 0903.2749 | a complete classification of the perfect binary one-error-correcting codes of length 15 as well as their extensions of length 16 was recently carried out in [p. r. j. \"osterg{\aa}rd and o. pottonen, "the perfect binary one-error-correcting codes of length 15: part i--classification," ieee trans. inform. theory vol. 55, pp. 4657--4660, 2009]. in the current accompanying work, the classified codes are studied in great detail, and their main properties are tabulated. the results include the fact that 33 of the 80 steiner triple systems of order 15 occur in such codes. further understanding is gained on full-rank codes via switching, as it turns out that all but two full-rank codes can be obtained through a series of such transformations from the hamming code. other topics studied include (non)systematic codes, embedded one-error-correcting codes, and defining sets of codes. a classification of certain mixed perfect codes is also obtained. | cs.it math.it | 10.1109/tit.2010.2046197 | 2009-03-16T00:00:00 | 2010-01-10T00:00:00 | ['östergård', 'pottonen', 'phelps'] | https://arxiv.org/abs/0903.2749 | 938 | 314
compressive estimation of doubly selective channels in multicarrier systems: leakage effects and sparsity-enhancing processing | 0903.2774 | we consider the application of compressed sensing (cs) to the estimation of doubly selective channels within pulse-shaping multicarrier systems (which include ofdm systems as a special case). by exploiting sparsity in the delay-doppler domain, cs-based channel estimation allows for an increase in spectral efficiency through a reduction of the number of pilot symbols. for combating leakage effects that limit the delay-doppler sparsity, we propose a sparsity-enhancing basis expansion and a method for optimizing the basis with or without prior statistical information about the channel. we also present an alternative cs-based channel estimator for (potentially) strongly time-frequency dispersive channels, which is capable of estimating the "off-diagonal" channel coefficients characterizing intersymbol and intercarrier interference (isi/ici). for this estimator, we propose a basis construction combining fourier (exponential) and prolate spheroidal sequences. simulation results assess the performance gains achieved by the proposed sparsity-enhancing processing techniques and by explicit estimation of isi/ici channel coefficients. | cs.it math.it | 10.1109/jstsp.2010.2042410 | 2009-03-16T00:00:00 | 2010-05-07T00:00:00 | ['tauboeck', 'hlawatsch', 'eiwen', 'rauhut'] | https://arxiv.org/abs/0903.2774 | 1,126 | 315
a parameter-free hedging algorithm | 0903.2851 | we study the problem of decision-theoretic online learning (dtol). motivated by practical applications, we focus on dtol when the number of actions is very large. previous algorithms for learning in this framework have a tunable learning rate parameter, and a barrier to using online-learning in practical applications is that it is not understood how to set this parameter optimally, particularly when the number of actions is large. in this paper, we offer a clean solution by proposing a novel and completely parameter-free algorithm for dtol. we introduce a new notion of regret, which is more natural for applications with a large number of actions. we show that our algorithm achieves good performance with respect to this new notion of regret; in addition, it also achieves performance close to that of the best bounds achieved by previous algorithms with optimally-tuned parameters, according to previous notions of regret. | cs.lg cs.ai | nan | 2009-03-16T00:00:00 | 2010-01-18T00:00:00 | ['chaudhuri', 'freund', 'hsu'] | https://arxiv.org/abs/0903.2851 | 921 | 316
tracking using explanation-based modeling | 0903.2862 | we study the tracking problem, namely, estimating the hidden state of an object over time, from unreliable and noisy measurements. the standard framework for the tracking problem is the generative framework, which is the basis of solutions such as the bayesian algorithm and its approximation, the particle filters. however, the problem with these solutions is that they are very sensitive to model mismatches. in this paper, motivated by online learning, we introduce a new framework -- an {\em explanatory} framework -- for tracking. we provide an efficient tracking algorithm for this framework. we provide experimental results comparing our algorithm to the bayesian algorithm on simulated data. our experiments show that when there are slight model mismatches, our algorithm vastly outperforms the bayesian algorithm. | cs.lg cs.ai cs.cv | nan | 2009-03-16T00:00:00 | 2010-01-18T00:00:00 | ['chaudhuri', 'freund', 'hsu'] | https://arxiv.org/abs/0903.2862 | 812 | 317
kalman filtering with intermittent observations: weak convergence to a stationary distribution | 0903.2890 | the paper studies the asymptotic behavior of random algebraic riccati equations (rare) arising in kalman filtering when the arrival of the observations is described by a bernoulli i.i.d. process. we model the rare as an order-preserving, strongly sublinear random dynamical system (rds). under a sufficient condition, stochastic boundedness, and using a limit-set dichotomy result for order-preserving, strongly sublinear rds, we establish the asymptotic properties of the rare: the sequence of random prediction error covariance matrices converges weakly to a unique invariant distribution, whose support exhibits fractal behavior. in particular, this weak convergence holds under broad conditions and even when the observations arrival rate is below the critical probability for mean stability. we apply the weak-feller property of the markov process governing the rare to characterize the support of the limiting invariant distribution as the topological closure of a countable set of points, which, in general, is not dense in the set of positive semi-definite matrices. we use the explicit characterization of the support of the invariant distribution and the almost sure ergodicity of the sample paths to easily compute the moments of the invariant distribution. a one dimensional example illustrates that the support is a fractured subset of the non-negative reals with self-similarity properties. | cs.it cs.lg math.it math.st stat.th | nan | 2009-03-16T00:00:00 | 2010-05-28T00:00:00 | ['kar', 'sinopoli', 'moura'] | https://arxiv.org/abs/0903.2890 | 1,386 | 318
norm-product belief propagation: primal-dual message-passing for approximate inference | 0903.3127 | in this paper we treat both forms of probabilistic inference, estimating marginal probabilities of the joint distribution and finding the most probable assignment, through a unified message-passing algorithm architecture. we generalize the belief propagation (bp) algorithms of sum-product and max-product and tree-reweighted (trw) sum and max product algorithms (trbp) and introduce a new set of convergent algorithms based on "convex-free-energy" and linear-programming (lp) relaxation as a zero-temperature of a convex-free-energy. the main idea of this work arises from taking a general perspective on the existing bp and trbp algorithms while observing that they all are reductions from the basic optimization formula of $f + \sum_i h_i$ where the function $f$ is an extended-valued, strictly convex but non-smooth and the functions $h_i$ are extended-valued functions (not necessarily convex). we use tools from convex duality to present the "primal-dual ascent" algorithm which is an extension of the bregman successive projection scheme and is designed to handle optimization of the general type $f + \sum_i h_i$. mapping the fractional-free-energy variational principle to this framework introduces the "norm-product" message-passing. special cases include sum-product and max-product (bp algorithms) and the trbp algorithms. when the fractional-free-energy is set to be convex (convex-free-energy) the norm-product is globally convergent for estimating of marginal probabilities and for approximating the lp-relaxation. we also introduce another branch of the norm-product, the "convex-max-product". the convex-max-product is convergent (unlike max-product) and aims at solving the lp-relaxation. | cs.ai cs.it math.it | nan | 2009-03-18T00:00:00 | 2010-06-28T00:00:00 | ['hazan', 'shashua'] | https://arxiv.org/abs/0903.3127 | 1,683 | 319
on oligopoly spectrum allocation game in cognitive radio networks with capacity constraints | 0903.3278 | dynamic spectrum sharing is a promising technology to improve spectrum utilization in the future wireless networks. the flexible spectrum management provides new opportunities for licensed primary user and unlicensed secondary users to reallocate the spectrum resource efficiently. in this paper, we present an oligopoly pricing framework for dynamic spectrum allocation in which the primary users sell excessive spectrum to the secondary users for monetary return. we present two approaches, the strict constraints (type-i) and the qos penalty (type-ii), to model the realistic situation that the primary users have limited capacities. in the oligopoly model with strict constraints, we propose a low-complexity searching method to obtain the nash equilibrium and prove its uniqueness. when reduced to a duopoly game, we analytically show the interesting gaps in the leader-follower pricing strategy. in the qos penalty based oligopoly model, a novel variable transformation method is developed to derive the unique nash equilibrium. when the market information is limited, we provide three myopically optimal algorithms "strictbest", "strictbr" and "qosbest" that enable price adjustment for duopoly primary users based on the best response function (brf) and the bounded rationality (br) principles. numerical results validate the effectiveness of our analysis and demonstrate the fast convergence of "strictbest" as well as "qosbest" to the nash equilibrium. for the "strictbr" algorithm, we reveal the chaotic behaviors of dynamic price adaptation in response to the learning rates. | cs.ni cs.gt | 10.1016/j.comnet.2009.11.018 | 2009-03-19T00:00:00 | 2009-06-15T00:00:00 | ['xu', 'lui', 'chiu'] | https://arxiv.org/abs/0903.3278 | 1,567 | 320
combinatorial ricci curvature and laplacians for image processing | 0903.3676 | a new combinatorial ricci curvature and laplacian operators for grayscale images are introduced and tested on 2d synthetic, natural and medical images. analogue formulae for voxels are also obtained. these notions are based upon more general concepts developed by r. forman. further applications, in particular a fitting ricci flow, are discussed. | cs.cv cs.cg | nan | 2009-03-23T00:00:00 | null | ['saucan', 'appleboilm', 'wolansky', 'zeevi'] | https://arxiv.org/abs/0903.3676 | 343 | 321
switcher-random-walks: a cognitive-inspired mechanism for network exploration | 0903.4132 | semantic memory is the subsystem of human memory that stores knowledge of concepts or meanings, as opposed to life specific experiences. the organization of concepts within semantic memory can be understood as a semantic network, where the concepts (nodes) are associated (linked) to others depending on perceptions, similarities, etc. lexical access is the complementary part of this system and allows the retrieval of such organized knowledge. while conceptual information is stored under certain underlying organization (and thus gives rise to a specific topology), it is crucial to have an accurate access to any of the information units, e.g. the concepts, for efficiently retrieving semantic information for real-time needings. an example of an information retrieval process occurs in verbal fluency tasks, and it is known to involve two different mechanisms: -clustering-, or generating words within a subcategory, and, when a subcategory is exhausted, -switching- to a new subcategory. we extended this approach to random-walking on a network (clustering) in combination to jumping (switching) to any node with certain probability and derived its analytical expression based on markov chains. results show that this dual mechanism contributes to optimize the exploration of different network models in terms of the mean first passage time. additionally, this cognitive inspired dual mechanism opens a new framework to better understand and evaluate exploration, propagation and transport phenomena in other complex systems where switching-like phenomena are feasible. | cs.ai cond-mat.dis-nn physics.soc-ph | 10.1142/s0218127410026204 | 2009-03-24T00:00:00 | null | ['goñi', 'martincorena', 'corominas-murtra', 'arrondo', 'ardanza-trevijano', 'villoslada'] | https://arxiv.org/abs/0903.4132 | 1,555 | 322
sepia: security through private information aggregation | 0903.4258 | secure multiparty computation (mpc) allows joint privacy-preserving computations on data of multiple parties. although mpc has been studied substantially, building solutions that are practical in terms of computation and communication cost is still a major challenge. in this paper, we investigate the practical usefulness of mpc for multi-domain network security and monitoring. we first optimize mpc comparison operations for processing high volume data in near real-time. we then design privacy-preserving protocols for event correlation and aggregation of network traffic statistics, such as addition of volume metrics, computation of feature entropy, and distinct item count. optimizing performance of parallel invocations, we implement our protocols along with a complete set of basic operations in a library called sepia. we evaluate the running time and bandwidth requirements of our protocols in realistic settings on a local cluster as well as on planetlab and show that they work in near real-time for up to 140 input providers and 9 computation nodes. compared to implementations using existing general-purpose mpc frameworks, our protocols are significantly faster, requiring, for example, 3 minutes for a task that takes 2 days with general-purpose frameworks. this improvement paves the way for new applications of mpc in the area of networking. finally, we run sepia's protocols on real traffic traces of 17 networks and show how they provide new possibilities for distributed troubleshooting and early anomaly detection. | cs.ni cs.cr | nan | 2009-03-25T00:00:00 | 2010-02-16T00:00:00 | ['burkhart', 'strasser', 'many', 'dimitropoulos'] | https://arxiv.org/abs/0903.4258 | 1,517 | 323
on decidability properties of one-dimensional cellular automata | 0903.4615 | in a recent paper sutner proved that the first-order theory of the phase-space $\mathcal{s}_\mathcal{a}=(q^\mathbb{z}, \longrightarrow)$ of a one-dimensional cellular automaton $\mathcal{a}$ whose configurations are elements of $q^\mathbb{z}$, for a finite set of states $q$, and where $\longrightarrow$ is the "next configuration relation", is decidable. he asked whether this result could be extended to a more expressive logic. we prove in this paper that this is actually the case. we first show that, for each one-dimensional cellular automaton $\mathcal{a}$, the phase-space $\mathcal{s}_\mathcal{a}$ is an omega-automatic structure. then, applying recent results of kuske and lohrey on omega-automatic structures, it follows that the first-order theory, extended with some counting and cardinality quantifiers, of the structure $\mathcal{s}_\mathcal{a}$, is decidable. we give some examples of new decidable properties for one-dimensional cellular automata. in the case of surjective cellular automata, some more efficient algorithms can be deduced from results of kuske and lohrey on structures of bounded degree. on the other hand we show that the case of cellular automata give new results on automatic graphs. | cs.lo cs.cc math.lo | nan | 2009-03-26T00:00:00 | 2009-06-01T00:00:00 | ['finkel'] | https://arxiv.org/abs/0903.4615 | 1,205 | 324
methods for detection and characterization of signals in noisy data with the hilbert-huang transform | 0903.4616 | the hilbert-huang transform is a novel, adaptive approach to time series analysis that does not make assumptions about the data form. its adaptive, local character allows the decomposition of non-stationary signals with high time-frequency resolution but also renders it susceptible to degradation from noise. we show that complementing the hht with techniques such as zero-phase filtering, kernel density estimation and fourier analysis allows it to be used effectively to detect and characterize signals with low signal to noise ratio. | physics.data-an cs.na gr-qc | 10.1103/physrevd.79.124022 | 2009-03-26T00:00:00 | null | ['stroeer', 'cannizzo', 'camp', 'gagarin'] | https://arxiv.org/abs/0903.4616 | 529 | 325
sqs-graphs of extended 1-perfect codes | 0903.5049 | a binary extended 1-perfect code $\mathcal c$ folds over its kernel via the steiner quadruple systems associated with its codewords. the resulting folding, proposed as a graph invariant for $\mathcal c$, distinguishes among the 361 nonlinear codes $\mathcal c$ of kernel dimension $\kappa$ with $9\geq\kappa\geq5$ obtained via solov'eva-phelps doubling construction. each of the 361 resulting graphs has most of its nonloop edges expressible in terms of the lexicographically disjoint quarters of the products of the components of two of the ten 1-perfect partitions of length 8 classified by phelps, and loops mostly expressible in terms of the lines of the fano plane. | math.co cs.it math.it | nan | 2009-03-29T00:00:00 | 2010-02-16T00:00:00 | ['dejter'] | https://arxiv.org/abs/0903.5049 | 663 | 326
complex dependencies in large software systems | 0904.0087 | two large, open source software systems are analyzed from the vantage point of complex adaptive systems theory. for both systems, the full dependency graphs are constructed and their properties are shown to be consistent with the assumption of stochastic growth. in particular, the afferent links are distributed according to zipf's law for both systems. using the small-world criterion for directed graphs, it is shown that contrary to claims in the literature, these software systems do not possess small-world properties. furthermore, it is argued that the small-world property is not of any particular advantage in a standard layered architecture. finally, it is suggested that the eigenvector centrality can play an important role in deciding which open source software packages to use in mission critical applications. this comes about because knowing the absolute number of afferent links alone is insufficient to decide how important a package is to the system as a whole, instead the importance of the linking package plays a major role as well. | nlin.ao cs.se physics.soc-ph | nan | 2009-04-01T00:00:00 | 2009-09-16T00:00:00 | ['kohring'] | https://arxiv.org/abs/0904.0087 | 1,040 | 327
differential reduction of generalized hypergeometric functions from feynman diagrams: one-variable case | 0904.0214 | the differential-reduction algorithm, which allows one to express generalized hypergeometric functions with parameters of arbitrary values in terms of such functions with parameters whose values differ from the original ones by integers, is discussed in the context of evaluating feynman diagrams. where this is possible, we compare our results with those obtained using standard techniques. it is shown that the criterion of reducibility of multiloop feynman integrals can be reformulated in terms of the criterion of reducibility of hypergeometric functions. the relation between the numbers of master integrals obtained by differential reduction and integration by parts is discussed. | hep-th cs.sc hep-ph math-ph math.ca math.mp | 10.1016/j.nuclphysb.2010.03.025 | 2009-04-01T00:00:00 | 2010-03-31T00:00:00 | ['bytev', 'kalmykov', 'kniehl'] | https://arxiv.org/abs/0904.0214 | 679 | 328
approximability of sparse integer programs | 0904.0859 | the main focus of this paper is a pair of new approximation algorithms for certain integer programs. first, for covering integer programs {min cx: ax >= b, 0 <= x <= d} where a has at most k nonzeroes per row, we give a k-approximation algorithm. (we assume a, b, c, d are nonnegative.) for any k >= 2 and eps>0, if p != np this ratio cannot be improved to k-1-eps, and under the unique games conjecture this ratio cannot be improved to k-eps. one key idea is to replace individual constraints by others that have better rounding properties but the same nonnegative integral solutions; another critical ingredient is knapsack-cover inequalities. second, for packing integer programs {max cx: ax <= b, 0 <= x <= d} where a has at most k nonzeroes per column, we give a (2k^2+2)-approximation algorithm. our approach builds on the iterated lp relaxation framework. in addition, we obtain improved approximations for the second problem when k=2, and for both problems when every a_{ij} is small compared to b_i. finally, we demonstrate a 17/16-inapproximability for covering integer programs with at most two nonzeroes per column. | cs.ds cs.dm | nan | 2009-04-06T00:00:00 | 2010-02-09T00:00:00 | ['pritchard', 'chakrabarty'] | https://arxiv.org/abs/0904.0859 | 1,113 | 329
boosting the accuracy of differentially-private histograms through consistency | 0904.0942 | we show that it is possible to significantly improve the accuracy of a general class of histogram queries while satisfying differential privacy. our approach carefully chooses a set of queries to evaluate, and then exploits consistency constraints that should hold over the noisy output. in a post-processing phase, we compute the consistent input most likely to have produced the noisy output. the final output is differentially-private and consistent, but in addition, it is often much more accurate. we show, both theoretically and experimentally, that these techniques can be used for estimating the degree sequence of a graph very precisely, and for computing a histogram that can support arbitrary range queries accurately. | cs.db cs.cr | nan | 2009-04-06T00:00:00 | 2010-07-08T00:00:00 | ['hay', 'rastogi', 'miklau', 'suciu'] | https://arxiv.org/abs/0904.0942 | 720 | 330
the distribution and deposition algorithm for multiple sequences sets | 0904.1242 | sequences set is a mathematical model used in many applications. as the number of the sequences becomes larger, single sequence set model is not appropriate for the rapidly increasing problem sizes. for example, more and more text processing applications separate a single big text file into multiple files before processing. for these applications, the underlying mathematical model is multiple sequences sets (mss). though there is increasing use of mss, there is little research on how to process mss efficiently. to process multiple sequences sets, sequences are first distributed to different sets, and then sequences for each set are processed. deriving effective algorithm for mss processing is both interesting and challenging. in this paper, we have defined the cost functions and performance ratio for analysis of the quality of synthesis sequences. based on these, the problem of process of multiple sequences sets (pmss) is formulated. we have first proposed two greedy algorithms for the pmss problem, which are based on generalization of algorithms for single sequences set. then based on the analysis of the characteristics of multiple sequences sets, we have proposed the distribution and deposition (dda) algorithm and dda* algorithm for pmss problem. in dda algorithm, the sequences are first distributed to multiple sets according to their alphabet contents; then sequences in each set are deposited by the deposition algorithm. the dda* algorithm differs from the dda algorithm in that the dda* algorithm distributes sequences by clustering based on sequence profiles. experiments show that dda and dda* always output results with smaller costs than other algorithms, and dda* outperforms dda in most instances. the dda and dda* algorithms are also efficient both in time and space. | cs.ds cs.dc cs.dm | nan | 2009-04-07T00:00:00 | 2010-04-29T00:00:00 | ['ning', 'leong'] | https://arxiv.org/abs/0904.1242 | 1,778 | 331
on the communication of scientific results: the full-metadata format | 0904.1299 | in this paper, we introduce a scientific format for text-based data files, which facilitates storing and communicating tabular data sets. the so-called full-metadata format builds on the widely used ini-standard and is based on four principles: readable self-documentation, flexible structure, fail-safe compatibility, and searchability. as a consequence, all metadata required to interpret the tabular data are stored in the same file, allowing for the automated generation of publication-ready tables and graphs and the semantic searchability of data file collections. the full-metadata format is introduced on the basis of three comprehensive examples. the complete format and syntax is given in the appendix. | cs.dl cs.ir physics.comp-ph physics.ins-det | 10.1016/j.cpc.2009.11.014 | 2009-04-08T00:00:00 | null | ['riede', 'schueppel', 'sylvester-hvid', 'kuehne', 'roettger', 'zimmermann', 'liehr'] | https://arxiv.org/abs/0904.1299 | 703 | 332
a unified approach to ranking in probabilistic databases | 0904.1366 | the dramatic growth in the number of application domains that naturally generate probabilistic, uncertain data has resulted in a need for efficiently supporting complex querying and decision-making over such data. in this paper, we present a unified approach to ranking and top-k query processing in probabilistic databases by viewing it as a multi-criteria optimization problem, and by deriving a set of features that capture the key properties of a probabilistic dataset that dictate the ranked result. we contend that a single, specific ranking function may not suffice for probabilistic databases, and we instead propose two parameterized ranking functions, called prf-w and prf-e, that generalize or can approximate many of the previously proposed ranking functions. we present novel generating functions-based algorithms for efficiently ranking large datasets according to these ranking functions, even if the datasets exhibit complex correlations modeled using probabilistic and/xor trees or markov networks. we further propose that the parameters of the ranking function be learned from user preferences, and we develop an approach to learn those parameters. finally, we present a comprehensive experimental study that illustrates the effectiveness of our parameterized ranking functions, especially prf-e, at approximating other ranking functions and the scalability of our proposed algorithms for exact or approximate ranking. | cs.db cs.ds | nan | 2009-04-08T00:00:00 | 2010-12-15T00:00:00 | ['li', 'saha', 'deshpande'] | https://arxiv.org/abs/0904.1366 | 1,418 | 333
towards an explanatory and computational theory of scientific discovery | 0904.1439 | we propose an explanatory and computational theory of transformative discoveries in science. the theory is derived from a recurring theme found in a diverse range of scientific change, scientific discovery, and knowledge diffusion theories in philosophy of science, sociology of science, social network analysis, and information science. the theory extends the concept of structural holes from social networks to a broader range of associative networks found in science studies, especially including networks that reflect underlying intellectual structures such as co-citation networks and collaboration networks. the central premise is that connecting otherwise disparate patches of knowledge is a valuable mechanism of creative thinking in general and transformative scientific discovery in particular. | cs.gl cs.cy | nan | 2009-04-08T00:00:00 | null | ['chen', 'chen', 'horowitz', 'hou', 'liu', 'pellegrino'] | https://arxiv.org/abs/0904.1439 | 794 | 334
spatial and temporal correlation of the interference in aloha ad hoc networks | 0904.1444 | interference is a main limiting factor of the performance of a wireless ad hoc network. the temporal and the spatial correlation of the interference makes the outages correlated temporally (important for retransmissions) and spatially correlated (important for routing). in this letter we quantify the temporal and spatial correlation of the interference in a wireless ad hoc network whose nodes are distributed as a poisson point process on the plane when aloha is used as the multiple-access scheme. | cs.it cs.ni math.it math.pr | 10.1109/lcomm.2009.090837 | 2009-04-08T00:00:00 | null | ['ganti', 'haenggi'] | https://arxiv.org/abs/0904.1444 | 495 | 335
error bounds for repeat-accumulate codes decoded via linear programming | 0904.1692 | we examine regular and irregular repeat-accumulate (ra) codes with repetition degrees which are all even. for these codes and with a particular choice of an interleaver, we give an upper bound on the decoding error probability of a linear-programming based decoder which is an inverse polynomial in the block length. our bound is valid for any memoryless, binary-input, output-symmetric (mbios) channel. this result generalizes the bound derived by feldman et al., which was for regular ra(2) codes. | cs.it math.it | nan | 2009-04-10T00:00:00 | 2010-02-22T00:00:00 | ['goldenberg', 'burshtein'] | https://arxiv.org/abs/0904.1692 | 493 | 336
two designs of space-time block codes achieving full diversity with partial interference cancellation group decoding | 0904.1812 | a partial interference cancellation (pic) group decoding based space-time block code (stbc) design criterion was recently proposed by guo and xia, where the decoding complexity and the code rate trade-off is dealt when the full diversity is achieved. in this paper, two designs of stbc are proposed for any number of transmit antennas that can obtain full diversity when a pic group decoding (with a particular grouping scheme) is applied at receiver. with the pic group decoding and an appropriate grouping scheme for the decoding, the proposed stbc are shown to obtain the same diversity gain as the ml decoding, but have a low decoding complexity. the first proposed stbc is designed with multiple diagonal layers and it can obtain the full diversity for two-layer design with the pic group decoding and the rate is up to 2 symbols per channel use. but with pic-sic group decoding, the first proposed stbc can obtain full diversity for any number of layers and the rate can be full. the second proposed stbc can obtain full diversity and a rate up to 9/4 with the pic group decoding. some code design examples are given and simulation results show that the newly proposed stbc can well address the rate-performance-complexity tradeoff of the mimo systems. | cs.it math.it | nan | 2009-04-11T00:00:00 | 2010-01-04T00:00:00 | ['zhang', 'xu', 'xia'] | https://arxiv.org/abs/0904.1812 | 1,242 | 337
seidel minor, permutation graphs and combinatorial properties | 0904.1923 | a permutation graph is an intersection graph of segments lying between two parallel lines. a seidel complementation of a finite graph at one of its vertices $v$ consists in complementing the edges between the neighborhood and the non-neighborhood of $v$. two graphs are seidel complement equivalent if one can be obtained from the other by a successive application of seidel complementation. in this paper we introduce the new concept of seidel complementation and seidel minor, we then show that this operation preserves cographs and the structure of modular decomposition. the main contribution of this paper is to provide a new and succinct characterization of permutation graphs i.e. a graph is a permutation graph \iff it does not contain the following graphs: $c_5$, $c_7$, $xf_{6}^{2}$, $xf_{5}^{2n+3}$, $c_{2n}, n\geqslant6$ and their complement as seidel minor. in addition we provide a $o(n+m)$-time algorithm to output one of the forbidden seidel minor if the graph is not a permutation graph. | cs.dm | nan | 2009-04-13T00:00:00 | 2010-06-21T00:00:00 | ['limouzy'] | https://arxiv.org/abs/0904.1923 | 986 | 338
boosting through optimization of margin distributions | 0904.2037 | boosting has attracted much research attention in the past decade. the success of boosting algorithms may be interpreted in terms of the margin theory. recently it has been shown that generalization error of classifiers can be obtained by explicitly taking the margin distribution of the training data into account. most of the current boosting algorithms in practice usually optimizes a convex loss function and do not make use of the margin distribution. in this work we design a new boosting algorithm, termed margin-distribution boosting (mdboost), which directly maximizes the average margin and minimizes the margin variance simultaneously. this way the margin distribution is optimized. a totally-corrective optimization algorithm based on column generation is proposed to implement mdboost. experiments on uci datasets show that mdboost outperforms adaboost and lpboost in most cases. | cs.lg cs.cv | nan | 2009-04-13T00:00:00 | 2010-01-06T00:00:00 | ['shen', 'li'] | https://arxiv.org/abs/0904.2037 | 881 | 339
on stratified regions | 0904.2076 | type and effect systems are a tool to analyse statically the behaviour of programs with effects. we present a proof based on the so called reducibility candidates that a suitable stratification of the type and effect system entails the termination of the typable programs. the proof technique covers a simply typed, multi-threaded, call-by-value lambda-calculus, equipped with a variety of scheduling (preemptive, cooperative) and interaction mechanisms (references, channels, signals). | cs.lo | nan | 2009-04-14T00:00:00 | 2009-06-09T00:00:00 | ['amadio'] | https://arxiv.org/abs/0904.2076 | 480 | 340
on irreversible dynamic monopolies in general graphs | 0904.2306 | consider the following coloring process in a simple directed graph $g(v,e)$ with positive indegrees. initially, a set $s$ of vertices are white, whereas all the others are black. thereafter, a black vertex is colored white whenever more than half of its in-neighbors are white. the coloring process ends when no additional vertices can be colored white. if all vertices end up white, we call $s$ an irreversible dynamic monopoly (or dynamo for short) under the strict-majority scenario. an irreversible dynamo under the simple-majority scenario is defined similarly except that a black vertex is colored white when at least half of its in-neighbors are white. we derive upper bounds of $(2/3)\,|\,v\,|$ and $|\,v\,|/2$ on the minimum sizes of irreversible dynamos under the strict and the simple-majority scenarios, respectively. for the special case when $g$ is an undirected connected graph, we prove the existence of an irreversible dynamo with size at most $\lceil |\,v\,|/2 \rceil$ under the strict-majority scenario. let $\epsilon>0$ be any constant. we also show that, unless $\text{np}\subseteq \text{time}(n^{o(\ln \ln n)}),$ no polynomial-time, $((1/2-\epsilon)\ln |\,v\,|)$-approximation algorithms exist for finding the minimum irreversible dynamo under either the strict or the simple-majority scenario. the inapproximability results hold even for bipartite graphs with diameter at most 8. | cs.dm cs.dc | nan | 2009-04-15T00:00:00 | 2010-03-09T00:00:00 | ['chang', 'lyuu'] | https://arxiv.org/abs/0904.2306 | 1,384 | 341
what does newcomb's paradox teach us? | 0904.2540 | in newcomb's paradox you choose to receive either the contents of a particular closed box, or the contents of both that closed box and another one. before you choose though, an antagonist uses a prediction algorithm to deduce your choice, and fills the two boxes based on that deduction. newcomb's paradox is that game theory's expected utility and dominance principles appear to provide conflicting recommendations for what you should choose. a recent extension of game theory provides a powerful tool for resolving paradoxes concerning human choice, which formulates such paradoxes in terms of bayes nets. here we apply this tool to newcomb's scenario. we show that the conflicting recommendations in newcomb's scenario use different bayes nets to relate your choice and the algorithm's prediction. these two bayes nets are incompatible. this resolves the paradox: the reason there appears to be two conflicting recommendations is that the specification of the underlying bayes net is open to two, conflicting interpretations. we then show that the accuracy of the prediction algorithm in newcomb's paradox, the focus of much previous work, is irrelevant. we similarly show that the utility functions of you and the antagonist are irrelevant. we end by showing that newcomb's paradox is time-reversal invariant; both the paradox and its resolution are unchanged if the algorithm makes its `prediction' \emph{after} you make your choice rather than before. | cs.gt | nan | 2009-04-16T00:00:00 | 2010-09-30T00:00:00 | ['benford'] | https://arxiv.org/abs/0904.2540 | 1,439 | 342
interference relay channels - part ii: power allocation games | 0904.2587 | in the first part of this paper we have derived achievable transmission rates for the (single-band) interference relay channel (irc) when the relay implements either the amplify-and-forward, decode-and-forward or estimate-and-forward protocol. here, we consider wireless networks that can be modeled by a multi-band irc. we tackle the existence issue of nash equilibria (ne) in these networks where each information source is assumed to selfishly allocate its power between the available bands in order to maximize its individual transmission rate. interestingly, it is possible to show that the three power allocation (pa) games (corresponding to the three protocols assumed) under investigation are concave, which guarantees the existence of a pure ne after rosen [3]. then, as the relay can also optimize several parameters e.g., its position and transmit power, it is further considered as the leader of a stackelberg game where the information sources are the followers. our theoretical analysis is illustrated by simulations giving more insights on the addressed issues. | cs.it math.it | nan | 2009-04-16T00:00:00 | 2010-11-20T00:00:00 | ['belmega', 'djeumou', 'lasaulce'] | https://arxiv.org/abs/0904.2587 | 1,062 | 343
decision problems for nash equilibria in stochastic games | 0904.3325 | we analyse the computational complexity of finding nash equilibria in stochastic multiplayer games with $\omega$-regular objectives. while the existence of an equilibrium whose payoff falls into a certain interval may be undecidable, we single out several decidable restrictions of the problem. first, restricting the search space to stationary, or pure stationary, equilibria results in problems that are typically contained in pspace and np, respectively. second, we show that the existence of an equilibrium with a binary payoff (i.e. an equilibrium where each player either wins or loses with probability 1) is decidable. we also establish that the existence of a nash equilibrium with a certain binary payoff entails the existence of an equilibrium with the same payoff in pure, finite-state strategies. | cs.gt cs.lo | 10.1007/978-3-642-04027-6_37 | 2009-04-21T00:00:00 | 2009-06-08T00:00:00 | ['ummels', 'wojtczak'] | https://arxiv.org/abs/0904.3325 | 798 | 344
relation between the usual order and the enumeration orders of elements of r.e. sets | 0904.3607 | in this paper, we have compared r.e. sets based on their enumeration orders with turing machines. accordingly, we have defined the novel concept of uniformity for turing machines and r.e. sets and have studied some relationships between uniformity and both one-reducibility and turing reducibility. furthermore, we have defined the type-2 uniformity concept and studied r.e. sets and turing machines based on this concept. in the end, we have introduced a new structure called turing output binary search tree that helps us lighten some ideas. | cs.fl cs.cc math.lo | nan | 2009-04-23T00:00:00 | null | ['safilian', 'didehvar'] | https://arxiv.org/abs/0904.3607 | 526 | 345
variations of the turing test in the age of internet and virtual reality | 0904.3612 | inspired by hofstadter's coffee-house conversation (1982) and by the science fiction short story sam by schattschneider (1988), we propose and discuss criteria for non-mechanical intelligence. firstly, we emphasize the practical need for such tests in view of massively multiuser online role-playing games (mmorpgs) and virtual reality systems like second life. secondly, we demonstrate second life as a useful framework for implementing (some iterations of) that test. | cs.ai cs.hc | 10.1007/978-3-642-04617-9_45 | 2009-04-23T00:00:00 | null | ['neumann', 'reichenberger', 'ziegler'] | https://arxiv.org/abs/0904.3612 | 463 | 346
successive difference substitution based on column stochastic matrix and mechanical decision for positive semi-definite forms | 0904.4030 | the theory part of this paper is sketched as follows. based on column stochastic average matrix $t_n$ selected as a basic substitution matrix, the method of advanced successive difference substitution is established. then, a set of necessary and sufficient conditions for deciding positive semi-definite form on $\r^n_+$ is derived from this method. and furthermore, it is proved that the sequence of sds sets of a positive definite form is positively terminating. worked out according to these results, the maple program tsds3 not only automatically proves the polynomial inequalities, but also outputs counterexamples for the false. sometimes tsds3 does not halt, but it is very useful by experimenting on so many examples. | cs.sc | nan | 2009-04-27T00:00:00 | 2010-04-02T00:00:00 | ['yao'] | https://arxiv.org/abs/0904.4030 | 718 | 347
multihomogeneous resultant formulae for systems with scaled support | 0904.4064 | constructive methods for matrices of multihomogeneous (or multigraded) resultants for unmixed systems have been studied by weyman, zelevinsky, sturmfels, dickenstein and emiris. we generalize these constructions to mixed systems, whose newton polytopes are scaled copies of one polytope, thus taking a step towards systems with arbitrary supports. first, we specify matrices whose determinant equals the resultant and characterize the systems that admit such formulae. bezout-type determinantal formulae do not exist, but we describe all possible sylvester-type and hybrid formulae. we establish tight bounds for all corresponding degree vectors, and specify domains that will surely contain such vectors; the latter are new even for the unmixed case. second, we make use of multiplication tables and strong duality theory to specify resultant matrices explicitly, for a general scaled system, thus including unmixed systems. the encountered matrices are classified; these include a new type of sylvester-type matrix as well as bezout-type matrices, known as partial bezoutians. our public-domain maple implementation includes efficient storage of complexes in memory, and construction of resultant matrices. | cs.sc math.ag | nan | 2009-04-26T00:00:00 | 2010-02-03T00:00:00 | ['emiris', 'mantzaflaris'] | https://arxiv.org/abs/0904.4064 | 1,193 | 348
honei: a collection of libraries for numerical computations targeting multiple processor architectures | 0904.4152 | we present honei, an open-source collection of libraries offering a hardware oriented approach to numerical calculations. honei abstracts the hardware, and applications written on top of honei can be executed on a wide range of computer architectures such as cpus, gpus and the cell processor. we demonstrate the flexibility and performance of our approach with two test applications, a finite element multigrid solver for the poisson problem and a robust and fast simulation of shallow water waves. by linking against honei's libraries, we achieve a twofold speedup over straightforward c++ code using honei's sse backend, and additional 3-4 and 4-16 times faster execution on the cell and a gpu. a second important aspect of our approach is that the full performance capabilities of the hardware under consideration can be exploited by adding optimised application-specific operations to the honei libraries. honei provides all necessary infrastructure for development and evaluation of such kernels, significantly simplifying their development. | cs.ms | 10.1016/j.cpc.2009.04.018 | 2009-04-27T00:00:00 | null | ['van dyk', 'geveler', 'mallach', 'ribbrock', 'goeddeke', 'gutwenger'] | https://arxiv.org/abs/0904.4152 | 1,035 | 349
fundamentals of the backoff process in 802.11: dichotomy of the aggregation | 0904.4155 | this paper discovers fundamental principles of the backoff process that governs the performance of ieee 802.11. a simplistic principle founded upon regular variation theory is that the backoff time has a truncated pareto-type tail distribution with an exponent of $(\log \gamma)/\log m$ ($m$ is the multiplicative factor and $\gamma$ is the collision probability). this reveals that the per-node backoff process is heavy-tailed in the strict sense for $\gamma>1/m^2$, and paves the way for the following unifying result. the state-of-the-art theory on the superposition of the heavy-tailed processes is applied to establish a dichotomy exhibited by the aggregate backoff process, putting emphasis on the importance of time-scale on which we view the backoff processes. while the aggregation on normal time-scales leads to a poisson process, it is approximated by a new limiting process possessing long-range dependence (lrd) on coarse time-scales. this dichotomy turns out to be instrumental in formulating short-term fairness, extending existing formulas to arbitrary population, and to elucidate the absence of lrd in practical situations. a refined wavelet analysis is conducted to strengthen this argument. | cs.ni cs.pf | nan | 2009-04-27T00:00:00 | 2010-08-20T00:00:00 | ['cho', 'jiang'] | https://arxiv.org/abs/0904.4155 | 1,196 | 350
dictionary identification - sparse matrix-factorisation via $\ell_1$-minimisation | 0904.4774 | this article treats the problem of learning a dictionary providing sparse representations for a given signal class, via $\ell_1$-minimisation. the problem can also be seen as factorising a $\ddim \times \nsig$ matrix $y=(y_1 ... y_\nsig), y_n\in \r^\ddim$ of training signals into a $\ddim \times \natoms$ dictionary matrix $\dico$ and a $\natoms \times \nsig$ coefficient matrix $\x=(x_1... x_\nsig), x_n \in \r^\natoms$, which is sparse. the exact question studied here is when a dictionary coefficient pair $(\dico,\x)$ can be recovered as local minimum of a (nonconvex) $\ell_1$-criterion with input $y=\dico \x$. first, for general dictionaries and coefficient matrices, algebraic conditions ensuring local identifiability are derived, which are then specialised to the case when the dictionary is a basis. finally, assuming a random bernoulli-gaussian sparse model on the coefficient matrix, it is shown that sufficiently incoherent bases are locally identifiable with high probability. the perhaps surprising result is that the typically sufficient number of training samples $\nsig$ grows up to a logarithmic factor only linearly with the signal dimension, i.e. $\nsig \approx c \natoms \log \natoms$, in contrast to previous approaches requiring combinatorially many samples. | cs.it cs.lg math.it | nan | 2009-04-30T00:00:00 | 2010-03-01T00:00:00 | ['gribonval', 'schnass'] | https://arxiv.org/abs/0904.4774 | 1,268 | 351
millimeter-wave system for high data rate indoor communications | 0905.0315 | this paper presents the realization of a wireless gigabit ethernet communication system operating in the 60 ghz band. the system architecture uses a single carrier modulation. a differential encoded binary phase shift keying modulation and a differential demodulation scheme are adopted for the intermediate frequency blocks. the baseband blocks use reed-solomon rs (255, 239) coding and decoding for channel forward error correction (fec). first results of bit error rate (ber) measurements at 875 mbps, without channel coding, are presented for different antennas. | cs.ni | nan | 2009-05-04T00:00:00 | null | ['rakotondrainibe', 'kokar', 'zaharia', 'zein'] | https://arxiv.org/abs/0905.0315 | 560 | 352
a low complexity wireless gigabit ethernet ifof 60 ghz h/w platform and issues | 0905.0316 | this paper proposes a complete ifof system architecture derived from simplified ieee802.15.3c phy layer proposal to successfully ensure near 1 gbps on the air interface. the system architecture utilizes low complexity baseband processing modules. the byte/frame synchronization technique is designed to provide a high value of preamble detection probability and a very small value of the false detection probability. conventional reed-solomon rs (255, 239) coding is used for channel forward error correction (fec). good communication link quality and bit error rate (ber) results at 875 mbps are achieved with directional antennas. | cs.ni | nan | 2009-05-04T00:00:00 | null | ['rakotondrainibe', 'siaud', 'kokar', 'zaharia', 'brunet', 'tanguy', 'zein'] | https://arxiv.org/abs/0905.0316 | 624 | 353
60 ghz high data rate wireless communication system | 0905.0317 | this paper presents the design and the realization of a 60 ghz wireless gigabit ethernet communication system. a differential encoded binary phase shift keying modulation (dbpsk) and differential demodulation schemes are adopted for the if blocks. the gigabit ethernet interface allows a high speed transfer of multimedia files via a 60 ghz wireless link. first measurement results are shown for 875 mbps data rate. | cs.ni | nan | 2009-05-04T00:00:00 | null | ['rakotondrainibe', 'kokar', 'zaharia', 'zein'] | https://arxiv.org/abs/0905.0317 | 410 | 354
fully-functional static and dynamic succinct trees | 0905.0768 | we propose new succinct representations of ordinal trees, which have been studied extensively. it is known that any $n$-node static tree can be represented in $2n + o(n)$ bits and a number of operations on the tree can be supported in constant time under the word-ram model. however the data structures are complicated and difficult to dynamize. we propose a simple and flexible data structure, called the range min-max tree, that reduces the large number of relevant tree operations considered in the literature to a few primitives that are carried out in constant time on sufficiently small trees. the result is extended to trees of arbitrary size, achieving $2n + o(n/\polylog(n))$ bits of space. the redundancy is significantly lower than any previous proposal. our data structure builds on the range min-max tree to achieve $2n+o(n/\log n)$ bits of space and $o(\log n)$ time for all the operations. we also propose an improved data structure using $2n+o(n\log\log n/\log n)$ bits and improving the time to the optimal $o(\log n/\log \log n)$ for most operations. furthermore, we support sophisticated operations that allow attaching and detaching whole subtrees, in time $\order(\log^{1+\epsilon} n / \log\log n)$. our techniques are of independent interest. one allows representing dynamic bitmaps and sequences supporting rank/select and indels, within zero-order entropy bounds and optimal time $o(\log n / \log\log n)$ for all operations on bitmaps and polylog-sized alphabets, and $o(\log n \log \sigma / (\log\log n)^2)$ on larger alphabet sizes $\sigma$. this improves upon the best existing bounds for entropy-bounded storage of dynamic sequences, compressed full-text self-indexes, and compressed-space construction of the burrows-wheeler transform. | cs.ds | nan | 2009-05-06T00:00:00 | 2010-09-23T00:00:00 | ['navarro', 'sadakane'] | https://arxiv.org/abs/0905.0768 | 1,742 | 355
citation entropy and research impact estimation | 0905.1039 | a new indicator, a real valued $s$-index, is suggested to characterize a quality and impact of the scientific research output. it is expected to be at least as useful as the notorious $h$-index, at the same time avoiding some of its obvious drawbacks. however, surprisingly, the $h$-index is found to be quite a good indicator for majority of real-life citation data with their alleged zipfian behaviour for which these drawbacks do not show up. the style of the paper was chosen deliberately somewhat frivolous to indicate that any attempt to characterize the scientific output of a researcher by just one number always has an element of a grotesque game in it and should not be taken too seriously. i hope this frivolous style will be perceived as a funny decoration only. | physics.soc-ph cond-mat.stat-mech cs.dl | nan | 2009-05-07T00:00:00 | 2010-09-18T00:00:00 | ['silagadze'] | https://arxiv.org/abs/0905.1039 | 762 | 356
saddle-point solution of the fingerprinting capacity game under the marking assumption | 0905.1375 | we study a fingerprinting game in which the collusion channel is unknown. the encoder embeds fingerprints into a host sequence and provides the decoder with the capability to trace back pirated copies to the colluders. fingerprinting capacity has recently been derived as the limit value of a sequence of maxmin games with mutual information as the payoff function. however, these games generally do not admit saddle-point solutions and are very hard to solve numerically. here under the so-called boneh-shaw marking assumption, we reformulate the capacity as the value of a single two-person zero-sum game, and show that it is achieved by a saddle-point solution. if the maximal coalition size is $k$ and the fingerprint alphabet is binary, we derive equations that can numerically solve the capacity game for arbitrary $k$. we also provide tight upper and lower bounds on the capacity. finally, we discuss the asymptotic behavior of the fingerprinting game for large $k$ and practical implementation issues. | cs.it cs.cr math.it | 10.1109/isit.2009.5205882 | 2009-05-09T00:00:00 | null | ['huang', 'moulin'] | https://arxiv.org/abs/0905.1375 | 1,000 | 357
unsatisfiable linear cnf formulas are large and complex | 0905.1587 | we call a cnf formula linear if any two clauses have at most one variable in common. we show that there exist unsatisfiable linear k-cnf formulas with at most 4k^2 4^k clauses, and on the other hand, any linear k-cnf formula with at most 4^k/(8e^2k^2) clauses is satisfiable. the upper bound uses probabilistic means, and we have no explicit construction coming even close to it. one reason for this is that unsatisfiable linear formulas exhibit a more complex structure than general (non-linear) formulas: first, any treelike resolution refutation of any unsatisfiable linear k-cnf formula has size at least 2^(2^(k/2-1)). this implies that small unsatisfiable linear k-cnf formulas are hard instances for davis-putnam style splitting algorithms. second, if we require that the formula f have a strict resolution tree, i.e. every clause of f is used only once in the resolution tree, then we need at least a^a^...^a clauses, where a is approximately 2 and the height of this tower is roughly k. | cs.dm | nan | 2009-05-11T00:00:00 | 2010-10-28T00:00:00 | ['scheder'] | https://arxiv.org/abs/0905.1587 | 984 | 358
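the linearity condition used in this entry (any two clauses share at most one variable) is easy to state operationally; the following is only an illustrative sketch, with clause encoding and function names chosen by us, not taken from the paper.

```python
from itertools import combinations

def is_linear(cnf):
    # cnf: list of clauses, each clause a set of nonzero ints (DIMACS-style literals);
    # the formula is linear if every pair of clauses shares at most one *variable*
    var_sets = [{abs(lit) for lit in clause} for clause in cnf]
    return all(len(a & b) <= 1 for a, b in combinations(var_sets, 2))

# tiny examples
print(is_linear([{1, 2}, {1, 3}, {-2, -3}]))  # True: each pair shares <= 1 variable
print(is_linear([{1, 2}, {-1, -2}]))          # False: the two clauses share variables 1 and 2
```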
information ranking and power laws on trees | 0905.1738 | we study the situations when the solution to a weighted stochastic recursion has a power law tail. to this end, we develop two complementary approaches, the first one extends goldie's (1991) implicit renewal theorem to cover recursions on trees; and the second one is based on a direct sample path large deviations analysis of weighted recursive random sums. we believe that these methods may be of independent interest in the analysis of more general weighted branching processes as well as in the analysis of algorithms. | math.pr cs.pf | nan | 2009-05-11T00:00:00 | 2010-07-28T00:00:00 | ['jelenkovic', 'olvera-cravioto'] | https://arxiv.org/abs/0905.1738 | 516 | 359
aligning graphs and finding substructures by a cavity approach | 0905.1893 | we introduce a new distributed algorithm for aligning graphs or finding substructures within a given graph. it is based on the cavity method and is used to study the maximum-clique and the graph-alignment problems in random graphs. the algorithm allows to analyze large graphs and may find applications in fields such as computational biology. as a proof of concept we use our algorithm to align the similarity graphs of two interacting protein families involved in bacterial signal transduction, and to predict actually interacting protein partners between these families. | q-bio.qm cond-mat.stat-mech cs.ds | 10.1209/0295-5075/89/37009 | 2009-05-12T00:00:00 | 2010-04-01T00:00:00 | ['bradde', 'braunstein', 'mahmoudi', 'tria', 'weigt', 'zecchina'] | https://arxiv.org/abs/0905.1893 | 566 | 360
protection against link errors and failures using network coding | 0905.2248 | we propose a network-coding based scheme to protect multiple bidirectional unicast connections against adversarial errors and failures in a network. the network consists of a set of bidirectional primary path connections that carry the uncoded traffic. the end nodes of the bidirectional connections are connected by a set of shared protection paths that provide the redundancy required for protection. such protection strategies are employed in the domain of optical networks for recovery from failures. in this work we consider the problem of simultaneous protection against adversarial errors and failures. suppose that n_e paths are corrupted by the omniscient adversary. under our proposed protocol, the errors can be corrected at all the end nodes with 4n_e protection paths. more generally, if there are n_e adversarial errors and n_f failures, 4n_e + 2n_f protection paths are sufficient. the number of protection paths only depends on the number of errors and failures being protected against and is independent of the number of unicast connections. | cs.it cs.ni math.it | nan | 2009-05-14T00:00:00 | 2010-08-31T00:00:00 | ['li', 'ramamoorthy'] | https://arxiv.org/abs/0905.2248 | 1,047 | 361
residus de 2-formes differentielles sur les surfaces algebriques et applications aux codes correcteurs d'erreurs | 0905.2311 | the theory of algebraic-geometric codes has been developed in the beginning of the 80's after a paper of v.d. goppa. given a smooth projective algebraic curve x over a finite field, there are two different constructions of error-correcting codes. the first one, called "functional", uses some rational functions on x and the second one, called "differential", involves some rational 1-forms on this curve. hundreds of papers are devoted to the study of such codes. in addition, a generalization of the functional construction for algebraic varieties of arbitrary dimension is given by y. manin in an article of 1984. a few papers about such codes have been published, but nothing has been done concerning a generalization of the differential construction to the higher-dimensional case. in this thesis, we propose a differential construction of codes on algebraic surfaces. afterwards, we study the properties of these codes and particularly their relations with functional codes. a pretty surprising fact is that a main difference with the case of curves appears. indeed, if in the case of curves, a differential code is always the orthogonal of a functional one, this assertion generally fails for surfaces. last observation motivates the study of codes which are the orthogonal of some functional code on a surface. therefore, we prove that, under some condition on the surface, these codes can be realized as sums of differential codes. moreover, we show that some answers to some open problems "a la bertini" could give very interesting information on the parameters of these codes. | math.ag cs.it math.it | nan | 2009-05-14T00:00:00 | null | ['couvreur'] | https://arxiv.org/abs/0905.2311 | 1,569 | 362
combinatorial information distance | 0905.2386 | let $|a|$ denote the cardinality of a finite set $a$. for any real number $x$ define $t(x)=x$ if $x\geq1$ and 1 otherwise. for any finite sets $a,b$ let $\delta(a,b)$ $=$ $\log_{2}(t(|b\cap\bar{a}||a|))$. we define {this appears as technical report # arxiv:0905.2386v4. a shorter version appears in the {proc. of mini-conference on applied theoretical computer science (matcos-10)}, slovenia, oct. 13-14, 2010.} a new combinatorial distance $d(a,b)$ $=$ $\max\{\delta(a,b),\delta(b,a)\} $ which may be applied to measure the distance between binary strings of different lengths. the distance is based on a classical combinatorial notion of information introduced by kolmogorov. | cs.dm cs.it math.it | nan | 2009-05-14T00:00:00 | 2010-10-17T00:00:00 | ['ratsaby'] | https://arxiv.org/abs/0905.2386 | 668 | 363
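the distance in this entry is given explicitly, so it can be computed directly from the definitions; the sketch below is only an illustration in python, and the encoding of binary strings as sets of 1-positions is our own choice, not specified in the abstract.

```python
import math

def t(x):
    # t(x) = x if x >= 1, and 1 otherwise
    return x if x >= 1 else 1

def delta(a, b):
    # delta(A, B) = log2( t( |B intersect complement(A)| * |A| ) ) = log2( t(|B \ A| * |A|) )
    return math.log2(t(len(b - a) * len(a)))

def distance(a, b):
    # d(A, B) = max{ delta(A, B), delta(B, A) }
    return max(delta(a, b), delta(b, a))

# one possible way to compare binary strings of different lengths:
# represent each string by the set of positions holding a '1'
s, u = "10110", "1011100"
A = {i for i, c in enumerate(s) if c == "1"}
B = {i for i, c in enumerate(u) if c == "1"}
print(distance(A, B))
```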
the quantum and classical complexity of translationally invariant tiling and hamiltonian problems | 0905.2419 | we study the complexity of a class of problems involving satisfying constraints which remain the same under translations in one or more spatial directions. in this paper, we show hardness of a classical tiling problem on an n x n 2-dimensional grid and a quantum problem involving finding the ground state energy of a 1-dimensional quantum system of n particles. in both cases, the only input is n, provided in binary. we show that the classical problem is nexp-complete and the quantum problem is qma_exp-complete. thus, an algorithm for these problems which runs in time polynomial in n (exponential in the input size) would imply that exp = nexp or bqexp = qma_exp, respectively. although tiling in general is already known to be nexp-complete, to our knowledge, all previous reductions require that either the set of tiles and their constraints or some varying boundary conditions be given as part of the input. in the problem considered here, these are fixed, constant-sized parameters of the problem. instead, the problem instance is encoded solely in the size of the system. | quant-ph cs.cc | nan | 2009-05-14T00:00:00 | 2010-08-23T00:00:00 | ['gottesman', 'irani'] | https://arxiv.org/abs/0905.2419 | 1,067 | 364
on design and implementation of the distributed modular audio recognition framework: requirements and specification design document | 0905.2459 | we present the requirements and design specification of the open-source distributed modular audio recognition framework (dmarf), a distributed extension of marf. the distributed version aggregates a number of distributed technologies (e.g. java rmi, corba, web services) in a pluggable and modular model along with the provision of advanced distributed systems algorithms. we outline the associated challenges incurred during the design and implementation as well as overall specification of the project and its advantages and limitations. | cs.cv cs.dc cs.mm cs.ne cs.sd | 10.1007/978-90-481-3662-9_72 | 2009-05-14T00:00:00 | 2009-07-26T00:00:00 | ['mokhov'] | https://arxiv.org/abs/0905.2459 | 532 | 365
point-set registration: coherent point drift | 0905.2635 | point set registration is a key component in many computer vision tasks. the goal of point set registration is to assign correspondences between two sets of points and to recover the transformation that maps one point set to the other. multiple factors, including an unknown non-rigid spatial transformation, large dimensionality of point set, noise and outliers, make the point set registration a challenging problem. we introduce a probabilistic method, called the coherent point drift (cpd) algorithm, for both rigid and non-rigid point set registration. we consider the alignment of two point sets as a probability density estimation problem. we fit the gmm centroids (representing the first point set) to the data (the second point set) by maximizing the likelihood. we force the gmm centroids to move coherently as a group to preserve the topological structure of the point sets. in the rigid case, we impose the coherence constraint by re-parametrization of gmm centroid locations with rigid parameters and derive a closed form solution of the maximization step of the em algorithm in arbitrary dimensions. in the non-rigid case, we impose the coherence constraint by regularizing the displacement field and using the variational calculus to derive the optimal transformation. we also introduce a fast algorithm that reduces the method computation complexity to linear. we test the cpd algorithm for both rigid and non-rigid transformations in the presence of noise, outliers and missing points, where cpd shows accurate results and outperforms current state-of-the-art methods. | cs.cv | 10.1109/tpami.2010.46 | 2009-05-15T00:00:00 | null | ['myronenko', 'song'] | https://arxiv.org/abs/0905.2635 | 1,565 | 366
heterogeneous attachment strategies optimize the topology of dynamic wireless networks | 0905.2825 | in optimizing the topology of wireless networks built of a dynamic set of spatially embedded agents, there are many trade-offs to be dealt with. the network should preferably be as small (in the sense that the average, or maximal, pathlength is short) as possible, it should be robust to failures, not consume too much power, and so on. in this paper, we investigate simple models of how agents can choose their neighbors in such an environment. in our model of attachment, we can tune from one situation where agents prefer to attach to others in closest proximity, to a situation where distance is ignored (and thus attachments can be made to agents further away). we evaluate this scenario with several performance measures and find that the optimal topologies, for most of the quantities, are obtained for strategies resulting in a mix of most local and a few random connections. | cs.ni | 10.1140/epjb/e2010-00049-x | 2009-05-18T00:00:00 | null | ['kim', 'holme', 'fodor'] | https://arxiv.org/abs/0905.2825 | 871 | 367
a statistical learning approach to color demosaicing | 0905.2958 | a statistical learning/inference framework for color demosaicing is presented. we start with simplistic assumptions about color constancy, and recast color demosaicing as a blind linear inverse problem: color parameterizes the unknown kernel, while brightness takes on the role of a latent variable. an expectation-maximization algorithm naturally suggests itself for the estimation of them both. then, as we gradually broaden the family of hypotheses where color is learned, we let our demosaicing behave adaptively, in a manner that reflects our prior knowledge about the statistics of color images. we show that we can incorporate realistic, learned priors without essentially changing the complexity of the simple expectation-maximization algorithm we started with. | cs.cv | nan | 2009-05-18T00:00:00 | 2010-02-12T00:00:00 | ['oaknin'] | https://arxiv.org/abs/0905.2958 | 760 | 368
deficiency zero petri nets and product form | 0905.3158 | consider a markovian petri net with race policy. the marking process has a "product form" stationary distribution if the probability of viewing a given marking can be decomposed as the product over places of terms depending only on the local marking. first we observe that the deficiency zero theorem of feinberg, developed for chemical reaction networks, provides a structural and simple sufficient condition for the existence of a product form. in view of this, we study the classical subclass of free-choice nets. roughly, we show that the only such petri nets having a product form are the state machines which can alternatively be viewed as jackson networks. | cs.dm | nan | 2009-05-19T00:00:00 | 2010-05-31T00:00:00 | ['mairesse', 'nguyen'] | https://arxiv.org/abs/0905.3158 | 656 | 369
coevolutionary genetic algorithms for establishing nash equilibrium in symmetric cournot games | 0905.3640 | we use co-evolutionary genetic algorithms to model the players' learning process in several cournot models, and evaluate them in terms of their convergence to the nash equilibrium. the "social-learning" versions of the two co-evolutionary algorithms we introduce, establish nash equilibrium in those models, in contrast to the "individual learning" versions which, as we see here, do not imply the convergence of the players' strategies to the nash outcome. when players use "canonical co-evolutionary genetic algorithms" as learning algorithms, the process of the game is an ergodic markov chain, and therefore we analyze simulation results using both the relevant methodology and more general statistical tests, to find that in the "social" case, states leading to ne play are highly frequent at the stationary distribution of the chain, in contrast to the "individual learning" case, when ne is not reached at all in our simulations; to find that the expected hamming distance of the states at the limiting distribution from the "ne state" is significantly smaller in the "social" than in the "individual learning case"; to estimate the expected time that the "social" algorithms need to get to the "ne state" and verify their robustness and finally to show that a large fraction of the games played are indeed at the nash equilibrium. | cs.gt cs.lg | 10.1155/2010/573107 | 2009-05-22T00:00:00 | null | ['protopapas', 'kosmatopoulos', 'battaglia'] | https://arxiv.org/abs/0905.3640 | 1,321 | 370
a new solution to the relative orientation problem using only 3 points and the vertical direction | 0905.3964 | this paper presents a new method to recover the relative pose between two images, using three points and the vertical direction information. the vertical direction can be determined in two ways: 1- using direct physical measurement like imu (inertial measurement unit), 2- using vertical vanishing point. this knowledge of the vertical direction solves 2 unknowns among the 3 parameters of the relative rotation, so that only 3 homologous points are requested to position a couple of images. rewriting the coplanarity equations leads to a simpler solution. the remaining unknowns resolution is performed by an algebraic method using grobner bases. the elements necessary to build a specific algebraic solver are given in this paper, allowing for a real-time implementation. the results on real and synthetic data show the efficiency of this method. | cs.cv | 10.1007/s10851-010-0234-2 | 2009-05-25T00:00:00 | null | ['kalantari', 'hashemi', 'jung', 'guedon'] | https://arxiv.org/abs/0905.3964 | 837 | 371
continued fraction expansion of real roots of polynomial systems | 0905.3993 | we present a new algorithm for isolating the real roots of a system of multivariate polynomials, given in the monomial basis. it is inspired by existing subdivision methods in the bernstein basis; it can be seen as a generalization of the univariate continued fraction algorithm or alternatively as a full analog of bernstein subdivision in the monomial basis. the representation of the subdivided domains is done through homographies, which allows us to use only integer arithmetic and to treat efficiently unbounded regions. we use univariate bounding functions, projection and preconditioning techniques to reduce the domain of search. the resulting boxes have optimized rational coordinates, corresponding to the first terms of the continued fraction expansion of the real roots. an extension of vincent's theorem to multivariate polynomials is proved and used for the termination of the algorithm. new complexity bounds are provided for a simplified version of the algorithm. examples computed with a preliminary c++ implementation illustrate the approach. | cs.sc | 10.1145/1577190.1577207 | 2009-05-25T00:00:00 | 2010-11-11T00:00:00 | ['mantzaflaris', 'mourrain', 'tsigaridas'] | https://arxiv.org/abs/0905.3993 | 1,047 | 372
a 4/3-competitive randomized algorithm for online scheduling of packets with agreeable deadlines | 0905.4068 | in 2005 li et al. gave a phi-competitive deterministic online algorithm for scheduling of packets with agreeable deadlines with a very interesting analysis. this is known to be optimal due to a lower bound by hajek. we claim that the algorithm by li et al. can be slightly simplified, while retaining its competitive ratio. then we introduce randomness to the modified algorithm and argue that the competitive ratio against oblivious adversary is at most 4/3. note that this still leaves a gap between the best known lower bound of 5/4 by chin et al. for randomised algorithms against oblivious adversary. | cs.ds | nan | 2009-05-25T00:00:00 | 2010-02-03T00:00:00 | ['jeż'] | https://arxiv.org/abs/0905.4068 | 598 | 373
google matrix, dynamical attractors and ulam networks | 0905.4162 | we study the properties of the google matrix generated by a coarse-grained perron-frobenius operator of the chirikov typical map with dissipation. the finite size matrix approximant of this operator is constructed by the ulam method. this method applied to the simple dynamical model creates the directed ulam networks with approximate scale-free scaling and characteristics being rather similar to those of the world wide web. the simple dynamical attractors play here the role of popular web sites with a strong concentration of pagerank. a variation of the google parameter $\alpha$ or other parameters of the dynamical map can drive the pagerank of the google matrix to a delocalized phase with a strange attractor where the google search becomes inefficient. | cs.ir | 10.1103/physreve.81.036213 | 2009-05-26T00:00:00 | 2009-08-20T00:00:00 | ['shepelyansky', 'zhirov'] | https://arxiv.org/abs/0905.4162 | 754 | 374
statistical properties of fluctuations: a method to check market behavior | 0905.4237 | we analyze the bombay stock exchange (bse) price index over the period of last 12 years. keeping in mind the large fluctuations in last few years, we carefully find out the transient, non-statistical and locally structured variations. for that purpose, we make use of daubechies wavelet and characterize the fractal behavior of the returns using a recently developed wavelet based fluctuation analysis method. the returns show a fat-tail distribution as also weak non-statistical behavior. we have also carried out continuous wavelet as well as fourier power spectral analysis to characterize the periodic nature and correlation properties of the time series. | q-fin.st cs.ds physics.data-an | 10.1007/978-88-470-1501-2_13 | 2009-05-26T00:00:00 | null | ['panigrahi', 'ghosh', 'manimaran', 'ahalpara'] | https://arxiv.org/abs/0905.4237 | 651 | 375
turbo packet combining strategies for the mimo-isi arq channel | 0905.4541 | this paper addresses the issue of efficient turbo packet combining techniques for coded transmission with a chase-type automatic repeat request (arq) protocol operating over a multiple-input--multiple-output (mimo) channel with intersymbol interference (isi). first of all, we investigate the outage probability and the outage-based power loss of the mimo-isi arq channel when optimal maximum a posteriori (map) turbo packet combining is used at the receiver. we show that the arq delay (i.e., the maximum number of arq rounds) does not completely translate into a diversity gain. we then introduce two efficient turbo packet combining algorithms that are inspired by minimum mean square error (mmse)-based turbo equalization techniques. both schemes can be viewed as low-complexity versions of the optimal map turbo combiner. the first scheme is called signal-level turbo combining and performs packet combining and multiple transmission isi cancellation jointly at the signal-level. the second scheme, called symbol-level turbo combining, allows arq rounds to be separately turbo equalized, while combining is performed at the filter output. we conduct a complexity analysis where we demonstrate that both algorithms have almost the same computational cost as the conventional log-likelihood ratio (llr)-level combiner. simulation results show that both proposed techniques outperform llr-level combining, while for some representative mimo configurations, signal-level combining has better isi cancellation capability and achievable diversity order than that of symbol-level combining. | cs.it math.it | 10.1109/tcomm.2009.12.080318 | 2009-05-27T00:00:00 | 2010-08-04T00:00:00 | ['ait-idir', 'saoudi'] | https://arxiv.org/abs/0905.4541 | 1,568 | 376
solving $k$-nearest neighbor problem on multiple graphics processors | 0906.0231 | the recommendation system is a software system to predict customers' unknown preferences from known preferences. in the recommendation system, customers' preferences are encoded into vectors, and finding the nearest vectors to each vector is an essential part. this vector-searching part of the problem is called a $k$-nearest neighbor problem. we give an effective algorithm to solve this problem on multiple graphics processor units (gpus). our algorithm consists of two parts: an $n$-body problem and a partial sort. for an algorithm of the $n$-body problem, we applied the idea of a known algorithm for the $n$-body problem in physics, although another trick is needed to overcome the problem of small sized shared memory. for the partial sort, we give a novel gpu algorithm which is effective for small $k$. in our partial sort algorithm, a heap is accessed in parallel by threads with a low cost of synchronization. both of these two parts of our algorithm utilize maximal power of coalesced memory access, so that a full bandwidth is achieved. by an experiment, we show that when the size of the problem is large, an implementation of the algorithm on two gpus runs more than 330 times faster than a single core implementation on a latest cpu. we also show that our algorithm scales well with respect to the number of gpus. | cs.ir cs.ds cs.ne | nan | 2009-06-01T00:00:00 | 2010-07-14T00:00:00 | ['kato', 'hosino'] | https://arxiv.org/abs/0906.0231 | 1,314 | 377
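for orientation only, here is a minimal cpu-side reference for the $k$-nearest neighbor problem this entry addresses (pairwise distances plus a heap-based partial sort for small $k$); it does not reproduce the paper's gpu-specific coalesced-memory or parallel-heap techniques, and all names below are ours.

```python
import heapq

def knn_brute_force(queries, data, k):
    # for each query vector, keep the k smallest squared distances in a max-heap
    # (distances negated so heapq's min-heap behaves as a bounded max-heap)
    results = []
    for q in queries:
        heap = []
        for idx, v in enumerate(data):
            d = sum((a - b) ** 2 for a, b in zip(q, v))
            if len(heap) < k:
                heapq.heappush(heap, (-d, idx))
            elif -heap[0][0] > d:
                heapq.heapreplace(heap, (-d, idx))
        results.append(sorted((-nd, i) for nd, i in heap))
    return results

data = [[0, 0], [1, 0], [0, 2], [3, 3]]
print(knn_brute_force([[0, 1]], data, k=2))  # the two points closest to (0, 1)
```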
medium access control protocols with memory | 0906.0531 | many existing medium access control (mac) protocols utilize past information (e.g., the results of transmission attempts) to adjust the transmission parameters of users. this paper provides a general framework to express and evaluate distributed mac protocols utilizing a finite length of memory for a given form of feedback information. we define protocols with memory in the context of a slotted random access network with saturated arrivals. we introduce two performance metrics, throughput and average delay, and formulate the problem of finding an optimal protocol. we first show that a tdma outcome, which is the best outcome in the considered scenario, can be obtained after a transient period by a protocol with (n-1)-slot memory, where n is the total number of users. next, we analyze the performance of protocols with 1-slot memory using a markov chain and numerical methods. protocols with 1-slot memory can achieve throughput arbitrarily close to 1 (i.e., 100% channel utilization) at the expense of large average delay, by correlating successful users in two consecutive slots. finally, we apply our framework to wireless local area networks. | cs.ni cs.it math.it | 10.1109/tnet.2010.2050699 | 2009-06-02T00:00:00 | 2010-01-07T00:00:00 | ['park', 'van der schaar'] | https://arxiv.org/abs/0906.0531 | 1,140 | 378
community detection in graphs | 0906.0612 | the modern science of networks has brought significant advances to our understanding of complex systems. one of the most relevant features of graphs representing real systems is community structure, or clustering, i. e. the organization of vertices in clusters, with many edges joining vertices of the same cluster and comparatively few edges joining vertices of different clusters. such clusters, or communities, can be considered as fairly independent compartments of a graph, playing a similar role like, e. g., the tissues or the organs in the human body. detecting communities is of great importance in sociology, biology and computer science, disciplines where systems are often represented as graphs. this problem is very hard and not yet satisfactorily solved, despite the huge effort of a large interdisciplinary community of scientists working on it over the past few years. we will attempt a thorough exposition of the topic, from the definition of the main elements of the problem, to the presentation of most methods developed, with a special focus on techniques designed by statistical physicists, from the discussion of crucial issues like the significance of clustering and how methods should be tested and compared against each other, to the description of applications to real networks. | physics.soc-ph cond-mat.stat-mech cs.ir physics.bio-ph physics.comp-ph q-bio.qm | 10.1016/j.physrep.2009.11.002 | 2009-06-03T00:00:00 | 2010-01-25T00:00:00 | ['fortunato'] | https://arxiv.org/abs/0906.0612 | 1,287 | 379
encoding models for scholarly literature | 0906.0675 | we examine the issue of digital formats for document encoding, archiving and publishing, through the specific example of "born-digital" scholarly journal articles. we will begin by looking at the traditional workflow of journal editing and publication, and how these practices have made the transition into the online domain. we will examine the range of different file formats in which electronic articles are currently stored and published. we will argue strongly that, despite the prevalence of binary and proprietary formats such as pdf and ms word, xml is a far superior encoding choice for journal articles. next, we look at the range of xml document structures (dtds, schemas) which are in common use for encoding journal articles, and consider some of their strengths and weaknesses. we will suggest that, despite the existence of specialized schemas intended specifically for journal articles (such as nlm), and more broadly-used publication-oriented schemas such as docbook, there are strong arguments in favour of developing a subset or customization of the text encoding initiative (tei) schema for the purpose of journal-article encoding; tei is already in use in a number of journal publication projects, and the scale and precision of the tei tagset makes it particularly appropriate for encoding scholarly articles. we will outline the document structure of a tei-encoded journal article, and look in detail at suggested markup patterns for specific features of journal articles. | cs.cl | 10.4018/978-1-60960-031-0 | 2009-06-03T00:00:00 | null | ['holmes', 'romary'] | https://arxiv.org/abs/0906.0675 | 1,476 | 380
computational complexity and numerical stability of linear problems | 0906.0687 | we survey classical and recent developments in numerical linear algebra, focusing on two issues: computational complexity, or arithmetic costs, and numerical stability, or performance under roundoff error. we present a brief account of the algebraic complexity theory as well as the general error analysis for matrix multiplication and related problems. we emphasize the central role played by the matrix multiplication problem and discuss historical and modern approaches to its solution. | cs.cc cs.ds cs.na math.ho math.na math.ra | 10.4171/077-1/16 | 2009-06-03T00:00:00 | 2009-09-11T00:00:00 | ['holtz', 'shomron'] | https://arxiv.org/abs/0906.0687 | 483 | 381
thinning, entropy and the law of thin numbers | 0906.0690 | renyi's "thinning" operation on a discrete random variable is a natural discrete analog of the scaling operation for continuous random variables. the properties of thinning are investigated in an information-theoretic context, especially in connection with information-theoretic inequalities related to poisson approximation results. the classical binomial-to-poisson convergence (sometimes referred to as the "law of small numbers") is seen to be a special case of a thinning limit theorem for convolutions of discrete distributions. a rate of convergence is provided for this limit, and nonasymptotic bounds are also established. this development parallels, in part, the development of gaussian inequalities leading to the information-theoretic version of the central limit theorem. in particular, a "thinning markov chain" is introduced, and it is shown to play a role analogous to that of the ornstein-uhlenbeck process in connection to the entropy power inequality. | cs.it math.it math.pr | 10.1109/tit.2010.2053893 | 2009-06-03T00:00:00 | null | ['harremoes', 'johnson', 'kontoyiannis'] | https://arxiv.org/abs/0906.0690 | 957 | 382
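renyi thinning has a simple sampling description: thin a nonnegative-integer random variable by keeping each of its "points" independently with probability alpha. the simulation sketch below illustrates the limit this entry refers to (thinning an n-fold convolution by 1/n approaches a poisson law); the helper names and the toy distribution are our own choices, not from the paper.

```python
import random

random.seed(0)

def thin(x, alpha):
    # renyi thinning T_alpha(x): keep each of the x points independently with probability alpha
    return sum(random.random() < alpha for _ in range(x))

def thinned_convolution_sample(sample_x, n):
    # thin the sum of n iid copies of X by alpha = 1/n; as n grows this approaches Poisson(E[X])
    return thin(sum(sample_x() for _ in range(n)), 1.0 / n)

sample_x = lambda: random.choice([0, 1, 2, 3])  # any nonnegative integer law; mean 1.5 here
draws = [thinned_convolution_sample(sample_x, n=200) for _ in range(2000)]
print(sum(draws) / len(draws))  # empirical mean close to 1.5, as for Poisson(1.5)
```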
a memetic algorithm for the multidimensional assignment problem | 0906.0862 | the multidimensional assignment problem (map or s-ap in the case of s dimensions) is an extension of the well-known assignment problem. the most studied case of map is 3-ap, though the problems with larger values of s have also a number of applications. in this paper we propose a memetic algorithm for map that is a combination of a genetic algorithm with a local search procedure. the main contribution of the paper is an idea of dynamically adjusted generation size, that yields an outstanding flexibility of the algorithm to perform well for both small and large fixed running times. the results of computational experiments for several instance families show that the proposed algorithm produces solutions of very high quality in a reasonable time and outperforms the state-of-the-art 3-ap memetic algorithm. | cs.ds | 10.1007/978-3-642-03751-1_12 | 2009-06-04T00:00:00 | null | ['gutin', 'karapetyan'] | https://arxiv.org/abs/0906.0862 | 803 | 383
coloring the square of the cartesian product of two cycles | 0906.1126 | the square $g^2$ of a graph $g$ is defined on the vertex set of $g$ in such a way that distinct vertices with distance at most two in $g$ are joined by an edge. we study the chromatic number of the square of the cartesian product $c_m\box c_n$ of two cycles and show that the value of this parameter is at most 7 except when $m=n=3$, in which case the value is 9, and when $m=n=4$ or $m=3$ and $n=5$, in which case the value is 8. moreover, we conjecture that whenever $g=c_m\box c_n$, the chromatic number of $g^2$ equals $\lceil mn/\alpha(g^2) \rceil$, where $\alpha(g^2)$ denotes the size of a maximal independent set in $g^2$. | cs.dm | nan | 2009-06-05T00:00:00 | 2010-05-31T00:00:00 | ['sopena', 'wu'] | https://arxiv.org/abs/0906.1126 | 622 | 384
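the objects in this entry are small enough to experiment with directly: in $c_m\box c_n$ the graph distance between two vertices is the sum of the wrap-around distances in each coordinate, so the square joins vertices at torus distance at most 2. the brute-force sketch below (assuming $m,n\geq 3$, and practical only for very small grids; all names are ours) reproduces the $m=n=3$ value quoted in the abstract.

```python
from itertools import product

def torus_square(m, n):
    # vertices of C_m box C_n and the edge set of its square G^2
    verts = list(product(range(m), range(n)))
    def dist(u, v):
        di = min((u[0] - v[0]) % m, (v[0] - u[0]) % m)
        dj = min((u[1] - v[1]) % n, (v[1] - u[1]) % n)
        return di + dj
    edges = {(u, v) for u in verts for v in verts if u != v and dist(u, v) <= 2}
    return verts, edges

def chromatic_number(verts, edges, max_k=12):
    # naive backtracking; exponential, so only for tiny instances
    adj = {v: {w for (u, w) in edges if u == v} for v in verts}
    def colorable(k, i=0, col={}):
        if i == len(verts):
            return True
        v = verts[i]
        for c in range(k):
            if all(col.get(w) != c for w in adj[v]):
                col[v] = c
                if colorable(k, i + 1, col):
                    return True
                del col[v]
        return False
    return next(k for k in range(1, max_k + 1) if colorable(k, 0, {}))

print(chromatic_number(*torus_square(3, 3)))  # 9, matching the m=n=3 case in the abstract
```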
phoenix cloud: consolidating different computing loads on shared cluster system for large organization | 0906.1346 | different departments of a large organization often run dedicated cluster systems for different computing loads, like hpc (high performance computing) jobs or web service applications. in this paper, we have designed and implemented a cloud management system software phoenix cloud to consolidate heterogeneous workloads from different departments affiliated to the same organization on the shared cluster system. we have also proposed cooperative resource provisioning and management policies for a large organization and its affiliated departments, running hpc jobs and web service applications, to share the consolidated cluster system. the experiments show that in comparison with the case that each department operates its dedicated cluster system, phoenix cloud significantly decreases the scale of the required cluster system for a large organization, improves the benefit of the scientific computing department, and at the same time provisions enough resources to the other department running web services with varying loads. | cs.dc | nan | 2009-06-07T00:00:00 | 2010-07-16T00:00:00 | ['zhan', 'wang', 'tu', 'li', 'wang', 'zhou', 'meng'] | https://arxiv.org/abs/0906.1346 | 1,020 | 385
fixed-parameter algorithms in analysis of heuristics for extracting networks in linear programs | 0906.1359 | we consider the problem of extracting a maximum-size reflected network in a linear program. this problem has been studied before and a state-of-the-art sga heuristic with two variations have been proposed. in this paper we apply a new approach to evaluate the quality of sga. in particular, we solve majority of the instances in the testbed to optimality using a new fixed-parameter algorithm, i.e., an algorithm whose runtime is polynomial in the input size but exponential in terms of an additional parameter associated with the given problem. this analysis allows us to conclude that the existing sga heuristic, in fact, produces solutions of a very high quality and often reaches the optimal objective values. however, sga contains two components which leave some space for improvement: building of a spanning tree and searching for an independent set in a graph. in the hope of obtaining even better heuristic, we tried to replace both of these components with some equivalent algorithms. we tried to use a fixed-parameter algorithm instead of a greedy one for searching of an independent set. but even the exact solution of this subproblem improved the whole heuristic insignificantly. hence, the crucial part of sga is building of a spanning tree. we tried three different algorithms, and it appears that the depth-first search is clearly superior to the other ones in building of the spanning tree for sga. thereby, by application of fixed-parameter algorithms, we managed to check that the existing sga heuristic is of a high quality and selected the component which required an improvement. this allowed us to intensify the research in a proper direction which yielded a superior variation of sga. | cs.ds cs.se | 10.1007/978-3-642-11269-0 | 2009-06-07T00:00:00 | 2009-10-31T00:00:00 | ['gutin', 'karapetyan', 'razgon'] | https://arxiv.org/abs/0906.1359 | 1,697 | 386
on the effectiveness of a binless entropy estimator for generalised entropic forms | 0906.1360 | in this manuscript we discuss the effectiveness of the kozachenko-leonenko entropy estimator when generalised to cope with entropic forms customarily applied to study systems evincing asymptotic scale invariance and dependence (either linear or non-linear type). we show that when the variables are independently and identically distributed the estimator is only valuable along the whole domain if the data follow the uniform distribution, whereas for other distributions the estimator is only effectual in the limit of the boltzmann-gibbs-shannon entropic form. we also analyse the influence of the dependence (linear and non-linear) between variables on the accuracy of the estimator between variables. as expected in the last case the estimator loses efficiency for the boltzmann-gibbs-shannon entropic form as well. | cs.it cs.ds math.it math.na | 10.1103/physreve.80.062101 | 2009-06-07T00:00:00 | 2010-01-03T00:00:00 | ['queiros'] | https://arxiv.org/abs/0906.1360 | 808 | 387
on quantum-classical equivalence for composed communication problems | 0906.1399 | an open problem in communication complexity proposed by several authors is to prove that for every boolean function f, the task of computing f(x and y) has polynomially related classical and quantum bounded-error complexities. we solve a variant of this question. for every f, we prove that the task of computing, on input x and y, both of the quantities f(x and y) and f(x or y) has polynomially related classical and quantum bounded-error complexities. we further show that the quantum bounded-error complexity is polynomially related to the classical deterministic complexity and the block sensitivity of f. this result holds regardless of prior entanglement. | cs.cc quant-ph | nan | 2009-06-07T00:00:00 | 2010-02-03T00:00:00 | ['sherstov'] | https://arxiv.org/abs/0906.1399 | 654 | 388
classical predicative logic-enriched type theories | 0906.1726 | a logic-enriched type theory (ltt) is a type theory extended with a primitive mechanism for forming and proving propositions. we construct two ltts, named ltto and ltto*, which we claim correspond closely to the classical predicative systems of second order arithmetic acao and aca. we justify this claim by translating each second-order system into the corresponding ltt, and proving that these translations are conservative. this is part of an ongoing research project to investigate how ltts may be used to formalise different approaches to the foundations of mathematics. the two ltts we construct are subsystems of the logic-enriched type theory lttw, which is intended to formalise the classical predicative foundation presented by hermann weyl in his monograph das kontinuum. the system acao has also been claimed to correspond to weyl's foundation. by casting acao and aca as ltts, we are able to compare them with lttw. it is a consequence of the work in this paper that lttw is strictly stronger than acao. the conservativity proof makes use of a novel technique for proving one ltt conservative over another, involving defining an interpretation of the stronger system out of the expressions of the weaker. this technique should be applicable in a wide variety of different cases outside the present work. | cs.lo math.lo | 10.1016/j.apal.2010.04.005 | 2009-06-09T00:00:00 | 2010-08-18T00:00:00 | ['adams', 'luo'] | https://arxiv.org/abs/0906.1726 | 1,302 | 389
the emergence of rational behavior in the presence of stochastic perturbations | 0906.2094 | we study repeated games where players use an exponential learning scheme in order to adapt to an ever-changing environment. if the game's payoffs are subject to random perturbations, this scheme leads to a new stochastic version of the replicator dynamics that is quite different from the "aggregate shocks" approach of evolutionary game theory. irrespective of the perturbations' magnitude, we find that strategies which are dominated (even iteratively) eventually become extinct and that the game's strict nash equilibria are stochastically asymptotically stable. we complement our analysis by illustrating these results in the case of congestion games. | math.pr cs.gt | 10.1214/09-aap651 | 2009-06-11T00:00:00 | 2010-10-21T00:00:00 | ['mertikopoulos', 'moustakas'] | https://arxiv.org/abs/0906.2094 | 647 | 390
properties of quasi-alphabetic tree bimorphisms | 0906.2369 | we study the class of quasi-alphabetic relations, i.e., tree transformations defined by tree bimorphisms with two quasi-alphabetic tree homomorphisms and a regular tree language. we present a canonical representation of these relations; as an immediate consequence, we get the closure under union. also, we show that they are not closed under intersection and complement, and do not preserve most common operations on trees (branches, subtrees, v-product, v-quotient, f-top-catenation). moreover, we prove that the translations defined by quasi-alphabetic tree bimorphism are exactly products of context-free string languages. we conclude by presenting the connections between quasi-alphabetic relations, alphabetic relations and classes of tree transformations defined by several types of top-down tree transducers. furthermore, we get that quasi-alphabetic relations preserve the recognizable and algebraic tree languages. | cs.cl cs.fl | nan | 2009-06-12T00:00:00 | null | ['maletti', 'tirnauca'] | https://arxiv.org/abs/0906.2369 | 912 | 391
from artifacts to aggregations: modeling scientific life cycles on the semantic web | 0906.2549 | in the process of scientific research, many information objects are generated, all of which may remain valuable indefinitely. however, artifacts such as instrument data and associated calibration information may have little value in isolation; their meaning is derived from their relationships to each other. individual artifacts are best represented as components of a life cycle that is specific to a scientific research domain or project. current cataloging practices do not describe objects at a sufficient level of granularity nor do they offer the globally persistent identifiers necessary to discover and manage scholarly products with world wide web standards. the open archives initiative's object reuse and exchange data model (oai-ore) meets these requirements. we demonstrate a conceptual implementation of oai-ore to represent the scientific life cycles of embedded networked sensor applications in seismology and environmental sciences. by establishing relationships between publications, data, and contextual research information, we illustrate how to obtain a richer and more realistic view of scientific practices. that view can facilitate new forms of scientific research and learning. our analysis is framed by studies of scientific practices in a large, multi-disciplinary, multi-university science and engineering research center, the center for embedded networked sensing (cens). | cs.dl cs.cy | 10.1002/asi.21263 | 2009-06-14T00:00:00 | 2009-10-20T00:00:00 | ['pepe', 'mayernik', 'borgman', 'van de sompel'] | https://arxiv.org/abs/0906.2549 | 1,383 | 392
bayesian history reconstruction of complex human gene clusters on a phylogeny | 0906.2635 | clusters of genes that have evolved by repeated segmental duplication present difficult challenges throughout genomic analysis, from sequence assembly to functional analysis. improved understanding of these clusters is of utmost importance, since they have been shown to be the source of evolutionary innovation, and have been linked to multiple diseases, including hiv and a variety of cancers. previously, zhang et al. (2008) developed an algorithm for reconstructing parsimonious evolutionary histories of such gene clusters, using only human genomic sequence data. in this paper, we propose a probabilistic model for the evolution of gene clusters on a phylogeny, and an mcmc algorithm for reconstruction of duplication histories from genomic sequences in multiple species. several projects are underway to obtain high quality bac-based assemblies of duplicated clusters in multiple species, and we anticipate that our method will be useful in analyzing these valuable new data sets. | cs.lg | 10.1007/978-3-642-04744-2_13 | 2009-06-15T00:00:00 | null | ['vinař', 'brejová', 'song', 'siepel'] | https://arxiv.org/abs/0906.2635 | 975 | 393
norms and commitment for iorgs(tm) information systems: direct logic(tm) and participatory grounding checking | 0906.2756 | the fundamental assumption of the event calculus is overly simplistic when it comes to organizations in which time-varying properties have to be actively maintained and managed in order to continue to hold and termination by another action is not required for a property to no longer hold. i.e., if active measures are not taken then things will go haywire by default. similarly extension and revision is required for grounding checking properties of systems based on a set of ground inferences. previously model checking has been performed using the model of nondeterministic automata based on states determined by time-points. these nondeterministic automata are not suitable for iorgs, which are highly structured and operate asynchronously with only loosely bounded nondeterminism. iorgs information systems have been developed as a technology in which organizations have people that are tightly integrated with information technology that enables them to function organizationally. iorgs formalize existing practices to provide a framework for addressing issues of authority, accountability, scalability, and robustness using methods that are analogous to human organizations. in general -iorgs are a natural extension of web services, which are the standard for distributed computing and software application interoperability in large-scale organizational computing. -iorgs are structured by organizational commitment that is a special case of physical commitment that is defined to be information pledged. iorgs norms are used to illustrate the following: -even a very simple microtheory for normative reasoning can engender inconsistency in practice, it is impossible to verify the consistency of a theory for a practical domain. -improved safety in reasoning. it is not safe to use classical logic and probability theory in practical reasoning. | cs.ma cs.lo cs.se | nan | 2009-06-15T00:00:00 | 2010-11-06T00:00:00 | ['hewitt'] | https://arxiv.org/abs/0906.2756 | 1,825 | 394
the jewett-krieger construction for tilings | 0906.2997 | given a random distribution of impurities on a periodic crystal, an equivalent uniquely ergodic tiling space is built, made of aperiodic, repetitive tilings with finite local complexity, and with configurational entropy close to the entropy of the impurity distribution. the construction is the tiling analog of the jewett-krieger theorem. | math.ds cs.it math.it math.pr | nan | 2009-06-16T00:00:00 | 2010-11-03T00:00:00 | ['palmer', 'bellissard'] | https://arxiv.org/abs/0906.2997 | 334 | 395
a systematic method for constructing time discretizations of integrable lattice systems: local equations of motion | 0906.3155 | we propose a new method for discretizing the time variable in integrable lattice systems while maintaining the locality of the equations of motion. the method is based on the zero-curvature (lax pair) representation and the lowest-order "conservation laws". in contrast to the pioneering work of ablowitz and ladik, our method allows the auxiliary dependent variables appearing in the stage of time discretization to be expressed locally in terms of the original dependent variables. the time-discretized lattice systems have the same set of conserved quantities and the same structures of the solutions as the continuous-time lattice systems; only the time evolution of the parameters in the solutions that correspond to the angle variables is discretized. the effectiveness of our method is illustrated using examples such as the toda lattice, the volterra lattice, the modified volterra lattice, the ablowitz-ladik lattice (an integrable semi-discrete nonlinear schroedinger system), and the lattice heisenberg ferromagnet model. for the volterra lattice and modified volterra lattice, we also present their ultradiscrete analogues. | nlin.si cs.na math-ph math.mp | 10.1088/1751-8113/43/41/415202 | 2009-06-17T00:00:00 | 2010-09-28T00:00:00 | ['tsuchida'] | https://arxiv.org/abs/0906.3155 | 1,121 | 396
approximate characterizations for the gaussian source broadcast distortion region | 0906.3183 | we consider the joint source-channel coding problem of sending a gaussian source on a k-user gaussian broadcast channel with bandwidth mismatch. a new outer bound to the achievable distortion region is derived using the technique of introducing more than one additional auxiliary random variable, which was previously used to derive a sum-rate lower bound for the symmetric gaussian multiple description problem. by combining this outer bound with the achievability result based on source-channel separation, we provide approximate characterizations of the achievable distortion region within constant multiplicative factors. furthermore, we show that the results can be extended to general broadcast channels, and the performance of the source-channel separation based approach is also within the same constant multiplicative factors of the optimum. | cs.it math.it | nan | 2009-06-17T00:00:00 | 2010-11-19T00:00:00 | ['tian', 'diggavi', 'shamai'] | https://arxiv.org/abs/0906.3183 | 837 | 397
efficient and portable sdr waveform development: the nucleus concept | 0906.3313 | future wireless communication systems should be flexible to support different waveforms (wfs) and be cognitive to sense the environment and tune themselves. this has led to tremendous interest in software defined radios (sdrs). constraints like throughput, latency and low energy demand high implementation efficiency. the tradeoff of going for a highly efficient implementation is the increase of porting effort to a new hardware (hw) platform. in this paper, we propose a novel concept for wf development, the nucleus concept, that exploits the common structure in various wireless signal processing algorithms and provides a way for efficient and portable implementation. tool-assisted wf mapping and exploration is done efficiently by propagating the implementation and interface properties of nuclei. the nucleus concept aims at providing software flexibility with high level programmability, but at the same time limiting hw flexibility to maximize area and energy efficiency. | cs.it cs.ni math.it | 10.1109/milcom.2009.5379897 | 2009-06-18T00:00:00 | 2009-07-23T00:00:00 | ['ramakrishnan', 'witte', 'kempf', 'kammler', 'ascheid', 'meyr', 'adrat', 'antweiler'] | https://arxiv.org/abs/0906.3313 | 971 | 398
finding significant subregions in large image databases | 0906.3585 | images have become an important data source in many scientific and commercial domains. analysis and exploration of image collections often requires the retrieval of the best subregions matching a given query. the support of such content-based retrieval requires not only the formulation of an appropriate scoring function for defining relevant subregions but also the design of new access methods that can scale to large databases. in this paper, we propose a solution to this problem of querying significant image subregions. we design a scoring scheme to measure the similarity of subregions. our similarity measure extends to any image descriptor. all the images are tiled and each alignment of the query and a database image produces a tile score matrix. we show that the problem of finding the best connected subregion from this matrix is np-hard and develop a dynamic programming heuristic. with this heuristic, we develop two index-based scalable search strategies, tars and spars, to query patterns in a large image repository. these strategies are general enough to work with other scoring schemes and heuristics. experimental results on real image datasets show that tars saves more than 87% query time on small queries, and spars saves up to 52% query time on large queries as compared to linear search. qualitative tests on synthetic and real datasets achieve precision of more than 80%. | cs.db cs.cv cs.ir | nan | 2009-06-19T00:00:00 | null | ['singh', 'bhattacharya', 'singh'] | https://arxiv.org/abs/0906.3585 | 1,382 | 399