Dataset columns (name: type, value-length range):
corpus_id: string, 7-12 chars
paper_id: string, 9-16 chars
title: string, 1-261 chars
abstract: string, 70-4.02k chars
source: string, 1 distinct value
bibtex: string, 208-20.9k chars
citation_key: string, 6-100 chars
arxiv-501
0706.1019
Probabilistic Anonymity and Admissible Schedulers
<|reference_start|>Probabilistic Anonymity and Admissible Schedulers: When studying safety properties of (formal) protocol models, it is customary to view the scheduler as an adversary: an entity trying to falsify the safety property. We show that in the context of security protocols, and in particular of anonymizing protocols, this gives the adversary too much power; for instance, the contents of encrypted messages and internal computations by the parties should be considered invisible to the adversary. We restrict the class of schedulers to a class of admissible schedulers which better model adversarial behaviour. These admissible schedulers base their decision solely on the past behaviour of the system that is visible to the adversary. Using this, we propose a definition of anonymity: for all admissible schedulers the identity of the users and the observations of the adversary are independent stochastic variables. We also develop a proof technique for typical cases that can be used to prove anonymity: a system is anonymous if it is possible to `exchange' the behaviour of two users without the adversary `noticing'.<|reference_end|>
arxiv
@article{garcia2007probabilistic, title={Probabilistic Anonymity and Admissible Schedulers}, author={Flavio D. Garcia and Peter van Rossum and Ana Sokolova}, journal={arXiv preprint arXiv:0706.1019}, year={2007}, archivePrefix={arXiv}, eprint={0706.1019}, primaryClass={cs.CR} }
garcia2007probabilistic
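The anonymity definition in the abstract above reduces, for finite systems, to a concrete check: the user identity and the adversary's observation must be independent random variables. A hypothetical sketch of that independence test on a finite joint distribution (the two example distributions are invented for illustration and are not taken from the paper):

```python
from itertools import product

def is_independent(joint: dict, tol: float = 1e-12) -> bool:
    """Check whether a finite joint distribution P(user, observation),
    given as {(user, obs): probability}, factorizes into its marginals,
    i.e. whether the two variables are independent."""
    users = {u for u, _ in joint}
    obs = {o for _, o in joint}
    p_u = {u: sum(joint.get((u, o), 0.0) for o in obs) for u in users}
    p_o = {o: sum(joint.get((u, o), 0.0) for u in users) for o in obs}
    return all(abs(joint.get((u, o), 0.0) - p_u[u] * p_o[o]) <= tol
               for u, o in product(users, obs))

# "Anonymous" system: observations carry no information about the user.
anon = {("alice", "x"): 0.25, ("alice", "y"): 0.25,
        ("bob", "x"): 0.25, ("bob", "y"): 0.25}
# "Leaky" system: observing "x" reveals that the user is alice.
leaky = {("alice", "x"): 0.5, ("bob", "y"): 0.5}
```

Per the paper's definition, the check would have to hold for every admissible scheduler, each inducing its own joint distribution; the sketch only covers one fixed distribution.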
arxiv-502
0706.1051
Improved Neural Modeling of Real-World Systems Using Genetic Algorithm Based Variable Selection
<|reference_start|>Improved Neural Modeling of Real-World Systems Using Genetic Algorithm Based Variable Selection: Neural network models of real-world systems, such as industrial processes, made from sensor data must often rely on incomplete data. System states may not all be known, sensor data may be biased or noisy, and it is not often known which sensor data may be useful for predictive modelling. Genetic algorithms may be used to help to address this problem by determining the near optimal subset of sensor variables most appropriate to produce good models. This paper describes the use of genetic search to optimize variable selection to determine inputs into the neural network model. We discuss genetic algorithm implementation issues including data representation types and genetic operators such as crossover and mutation. We present the use of this technique for neural network modelling of a typical industrial application, a liquid fed ceramic melter, and detail the results of the genetic search to optimize the neural network model for this application.<|reference_end|>
arxiv
@article{sofge2007improved, title={Improved Neural Modeling of Real-World Systems Using Genetic Algorithm Based Variable Selection}, author={Donald A. Sofge and David L. Elliott}, journal={D. Sofge and D. Elliott, "Improved Neural Modeling of Real-World Systems Using Genetic Algorithm Based Variable Selection," In Int'l Conf. on Neural Networks and Brain (ICNN&B'98-Beijing), 1998}, year={2007}, archivePrefix={arXiv}, eprint={0706.1051}, primaryClass={cs.NE} }
sofge2007improved
arxiv-503
0706.1061
Design, Implementation, and Cooperative Coevolution of an Autonomous/ Teleoperated Control System for a Serpentine Robotic Manipulator
<|reference_start|>Design, Implementation, and Cooperative Coevolution of an Autonomous/ Teleoperated Control System for a Serpentine Robotic Manipulator: Design, implementation, and machine learning issues associated with developing a control system for a serpentine robotic manipulator are explored. The controller developed provides autonomous control of the serpentine robotic manipulator during operation of the manipulator within an enclosed environment such as an underground storage tank. The controller algorithms make use of both low-level joint angle control employing force/position feedback constraints, and high-level coordinated control of end-effector positioning. This approach has resulted in both high-level full robotic control and low-level telerobotic control modes, and provides a high level of dexterity for the operator.<|reference_end|>
arxiv
@article{sofge2007design, title={Design, Implementation, and Cooperative Coevolution of an Autonomous/ Teleoperated Control System for a Serpentine Robotic Manipulator}, author={Donald Sofge and Gerald Chiang}, journal={D. Sofge and G. Chiang, "Design, ... a Serpentine Automated Waste Retrieval Manipulator," Amer. Nucl. Soc. 9th Top. Meeting on Robotics and Remote Systems, 2001}, year={2007}, archivePrefix={arXiv}, eprint={0706.1061}, primaryClass={cs.NE cs.RO} }
sofge2007design
arxiv-504
0706.1063
Small Worlds: Strong Clustering in Wireless Networks
<|reference_start|>Small Worlds: Strong Clustering in Wireless Networks: Small-worlds represent efficient communication networks that obey two distinguishing characteristics: a high clustering coefficient together with a small characteristic path length. This paper focuses on an interesting paradox, that removing links in a network can increase the overall clustering coefficient. Reckful Roaming, as introduced in this paper, is a 2-localized algorithm that takes advantage of this paradox in order to selectively remove superfluous links, this way optimizing the clustering coefficient while still retaining a sufficiently small characteristic path length.<|reference_end|>
arxiv
@article{brust2007small, title={Small Worlds: Strong Clustering in Wireless Networks}, author={Matthias R. Brust and Steffen Rothkugel}, journal={arXiv preprint arXiv:0706.1063}, year={2007}, archivePrefix={arXiv}, eprint={0706.1063}, primaryClass={cs.NI cs.DC cs.DS} }
brust2007small
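The paradox this abstract describes, that removing links can increase the overall clustering coefficient, is easy to reproduce on a toy graph. A sketch using the standard local clustering coefficient (the graph is invented for illustration; this shows the metric and the paradox only, not the Reckful Roaming algorithm itself):

```python
def clustering(adj: dict) -> float:
    """Average local clustering coefficient of an undirected graph
    given as {node: set_of_neighbours}."""
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # degree-0/1 nodes contribute 0 by convention
        # Count edges among v's neighbours (each unordered pair once).
        links = sum(1 for u in nbrs for w in nbrs
                    if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

# Triangle a-b-c with a pendant node d attached to a.
g = {"a": {"b", "c", "d"}, "b": {"a", "c"}, "c": {"a", "b"}, "d": {"a"}}
before = clustering(g)                      # 7/12, about 0.583
# Remove the "superfluous" link a-d ...
g["a"].discard("d"); g["d"].discard("a")
after = clustering(g)                       # 0.75: clustering went UP
```

The removal here also disconnects d, which is exactly why a real algorithm must balance the gain in clustering against the characteristic path length, as the paper states.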
arxiv-505
0706.1066
Applying Test-Paradigms in a Generic Tutoring System Concept for Web-based Learning
<|reference_start|>Applying Test-Paradigms in a Generic Tutoring System Concept for Web-based Learning: Realizing test scenarios through a tutoring system involves questions about the architecture and didactic methods of such a system. Observing that traditional tutoring systems are normally domain-static, this paper presents investigations toward a generic, domain-independent tutoring system for utilizing test scenarios in computer-based and web-based environments. Furthermore, test paradigms are analyzed, and an approach is presented for realizing functionality for applying test paradigms in the proposed generic tutoring system architecture through an XML-specified language.<|reference_end|>
arxiv
@article{brust2007applying, title={Applying Test-Paradigms in a Generic Tutoring System Concept for Web-based Learning}, author={Matthias R. Brust}, journal={arXiv preprint arXiv:0706.1066}, year={2007}, archivePrefix={arXiv}, eprint={0706.1066}, primaryClass={cs.CY} }
brust2007applying
arxiv-506
0706.1080
WACA: A Hierarchical Weighted Clustering Algorithm optimized for Mobile Hybrid Networks
<|reference_start|>WACA: A Hierarchical Weighted Clustering Algorithm optimized for Mobile Hybrid Networks: Clustering techniques create hierarchical network structures, called clusters, on an otherwise flat network. In a dynamic environment, in terms of node mobility as well as of steadily changing device parameters, the clusterhead election process has to be re-invoked according to a suitable update policy. Cluster re-organization causes additional message exchanges and computational complexity, and its execution has to be optimized. Our investigations focus on the problem of minimizing clusterhead re-elections by considering stability criteria. These criteria are based on topological characteristics as well as on device parameters. This paper presents a weighted clustering algorithm optimized to avoid needless clusterhead re-elections for stable clusters in mobile ad-hoc networks. The proposed localized algorithm deals with mobility, but does not require geographical, speed, or distance information.<|reference_end|>
arxiv
@article{brust2007waca:, title={WACA: A Hierarchical Weighted Clustering Algorithm optimized for Mobile Hybrid Networks}, author={Matthias R. Brust and Adrian Andronache and Steffen Rothkugel}, journal={arXiv preprint arXiv:0706.1080}, year={2007}, archivePrefix={arXiv}, eprint={0706.1080}, primaryClass={cs.DC cs.NI} }
brust2007waca:
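WACA elects clusterheads from node weights using only local information. As a rough illustration of localized, weight-based election (a generic textbook rule with an invented topology and weights, not WACA's actual stability criteria), a node can declare itself clusterhead when no 1-hop neighbour outranks it:

```python
def elect_clusterheads(neighbors: dict, weight: dict) -> set:
    """Localized election: a node declares itself clusterhead iff no
    1-hop neighbour has a strictly better (weight, id) pair.
    This is a generic weighted-clustering rule, not WACA's criterion."""
    def rank(v):
        return (weight[v], v)  # deterministic tie-break by node id
    return {v for v in neighbors
            if all(rank(v) > rank(u) for u in neighbors[v])}

# Hypothetical 5-node topology and per-device weights.
topo = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5}, 5: {4}}
w = {1: 0.9, 2: 0.4, 3: 0.7, 4: 0.8, 5: 0.2}
heads = elect_clusterheads(topo, w)   # nodes 1 and 4 win locally
```

Each node only needs its neighbours' weights, so the rule runs with 1-hop message exchange; avoiding needless re-elections when weights fluctuate is the harder problem the paper addresses.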
arxiv-507
0706.1084
Sublinear Algorithms for Approximating String Compressibility
<|reference_start|>Sublinear Algorithms for Approximating String Compressibility: We raise the question of approximating the compressibility of a string with respect to a fixed compression scheme, in sublinear time. We study this question in detail for two popular lossless compression schemes: run-length encoding (RLE) and Lempel-Ziv (LZ), and present sublinear algorithms for approximating compressibility with respect to both schemes. We also give several lower bounds that show that our algorithms for both schemes cannot be improved significantly. Our investigation of LZ yields results whose interest goes beyond the initial questions we set out to study. In particular, we prove combinatorial structural lemmas that relate the compressibility of a string with respect to Lempel-Ziv to the number of distinct short substrings contained in it. In addition, we show that approximating the compressibility with respect to LZ is related to approximating the support size of a distribution.<|reference_end|>
arxiv
@article{raskhodnikova2007sublinear, title={Sublinear Algorithms for Approximating String Compressibility}, author={Sofya Raskhodnikova and Dana Ron and Ronitt Rubinfeld and Adam Smith}, journal={arXiv preprint arXiv:0706.1084}, year={2007}, archivePrefix={arXiv}, eprint={0706.1084}, primaryClass={cs.DS} }
raskhodnikova2007sublinear
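For the RLE scheme above, the quantity being approximated is simply the number of maximal character runs in the string, i.e. the size of its run-length encoding in symbols. Computing it exactly takes linear time, which is exactly what the paper's sublinear estimators avoid; a short sketch of the exact count for reference:

```python
from itertools import groupby

def rle_runs(s: str) -> int:
    """Number of maximal runs in s: the size of its run-length
    encoding in symbols, the quantity the paper approximates."""
    return sum(1 for _ in groupby(s))

compressible = "a" * 50 + "b" * 50     # 2 runs out of 100 characters
incompressible = "ab" * 50             # 100 runs: RLE saves nothing
```

A sublinear estimator would sample positions of s and infer the run count approximately instead of scanning every character.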
arxiv-508
0706.1087
On Anomalies in Annotation Systems
<|reference_start|>On Anomalies in Annotation Systems: Today's computer-based annotation systems implement a wide range of functionalities that often go beyond those available in traditional paper-and-pencil annotations. Conceptually, annotation systems are based on thoroughly investigated psycho-sociological and pedagogical learning theories. They offer a huge diversity of annotation types that can be placed in textual as well as in multimedia format. Additionally, annotations can be published or shared with a group of interested parties via well-organized repositories. Although highly sophisticated annotation systems exist both conceptually and technologically, we still observe that their acceptance is somewhat limited. In this paper, we argue that present-day annotation systems suffer from several fundamental problems that are inherent in the traditional paper-and-pencil annotation paradigm. As a solution, we propose to shift the annotation paradigm for the implementation of annotation systems.<|reference_end|>
arxiv
@article{brust2007on, title={On Anomalies in Annotation Systems}, author={Matthias R. Brust and Steffen Rothkugel}, journal={arXiv preprint arXiv:0706.1087}, year={2007}, archivePrefix={arXiv}, eprint={0706.1087}, primaryClass={cs.HC cs.CY} }
brust2007on
arxiv-509
0706.1096
Inquiring the Potential of Evoking Small-World Properties for Self-Organizing Communication Networks
<|reference_start|>Inquiring the Potential of Evoking Small-World Properties for Self-Organizing Communication Networks: Mobile multi-hop ad hoc networks allow establishing local groups of communicating devices in a self-organizing way. However, in a global setting such networks fail to work properly due to network partitioning. Provided that devices are capable of communicating both locally, e.g. using Wi-Fi or Bluetooth, and additionally with arbitrary remote devices, e.g. using GSM/UMTS links, the objective is to find efficient ways of inter-linking multiple network partitions. Tackling this problem of topology control, we focus on the class of small-world networks that obey two distinguishing characteristics: they have strong local clustering while still retaining a small average distance between two nodes. This paper reports on results gained investigating the question of whether small-world properties are indicative of efficient link management in multiple multi-hop ad hoc network partitions.<|reference_end|>
arxiv
@article{brust2007inquiring, title={Inquiring the Potential of Evoking Small-World Properties for Self-Organizing Communication Networks}, author={Matthias R. Brust and Steffen Rothkugel and Carlos H.C. Ribeiro}, journal={Proceedings of the 5th International Conference on Networking (ICN 06), IEEE Computer Society Press, 2006}, year={2007}, doi={10.1109/ICNICONSMCL.2006.124}, archivePrefix={arXiv}, eprint={0706.1096}, primaryClass={cs.NI} }
brust2007inquiring
arxiv-510
0706.1118
Asynchronous games: innocence without alternation
<|reference_start|>Asynchronous games: innocence without alternation: The notion of innocent strategy was introduced by Hyland and Ong in order to capture the interactive behaviour of lambda-terms and PCF programs. An innocent strategy is defined as an alternating strategy with partial memory, in which the strategy plays according to its view. Extending the definition to non-alternating strategies is problematic, because the traditional definition of views is based on the hypothesis that Opponent and Proponent alternate during the interaction. Here, we take advantage of the diagrammatic reformulation of alternating innocence in asynchronous games, in order to provide a tentative definition of innocence in non-alternating games. The task is interesting, and far from easy. It requires the combination of true concurrency and game semantics in a clean and organic way, clarifying the relationship between asynchronous games and concurrent games in the sense of Abramsky and Melli\`es. It also requires an interactive reformulation of the usual acyclicity criterion of linear logic, as well as a directed variant, as a scheduling criterion.<|reference_end|>
arxiv
@article{melliès2007asynchronous, title={Asynchronous games: innocence without alternation}, author={Paul-Andr\'e Melli\`es (PPS) and Samuel Mimram (PPS)}, journal={arXiv preprint arXiv:0706.1118}, year={2007}, archivePrefix={arXiv}, eprint={0706.1118}, primaryClass={cs.LO} }
melliès2007asynchronous
arxiv-511
0706.1119
Cointegration of the Daily Electric Power System Load and the Weather
<|reference_start|>Cointegration of the Daily Electric Power System Load and the Weather: The paper makes a thermal predictive analysis of the electric power system security for a day ahead. This predictive analysis is set as a thermal computation of the expected security. This computation is obtained by cointegrating the daily electric power system load and the weather, by finding the daily electric power system thermodynamics and by introducing tests for this thermodynamics. The predictive analysis made shows the electricity consumers' wisdom.<|reference_end|>
arxiv
@article{stefanov2007cointegration, title={Cointegration of the Daily Electric Power System Load and the Weather}, author={Stefan Z. Stefanov}, journal={arXiv preprint arXiv:0706.1119}, year={2007}, archivePrefix={arXiv}, eprint={0706.1119}, primaryClass={cs.CE} }
stefanov2007cointegration
arxiv-512
0706.1127
Redesigning Computer-based Learning Environments: Evaluation as Communication
<|reference_start|>Redesigning Computer-based Learning Environments: Evaluation as Communication: In the field of evaluation research, computer scientists constantly live with dilemmas and conflicting theories. As evaluation is perceived and modeled differently across educational areas, it is not difficult to become trapped in dilemmas, which reflects an epistemological weakness. Additionally, designing and developing a computer-based learning scenario is not an easy task. Advancing further, with end-users probing the system in realistic settings, is even harder. Computer science research in evaluation faces an immense challenge, having to cope with contributions from several conflicting and controversial research fields. We believe that deep changes must be made in our field if we are to advance beyond the CBT (computer-based training) learning model and to build an adequate epistemology for this challenge. The first task is to relocate our field by building upon recent results from philosophy, psychology, social sciences, and engineering. In this article we locate evaluation with respect to communication studies. Evaluation presupposes a definition of goals to be reached, and we suggest that it is, by many means, a silent communication between teacher and student, peers, and institutional entities. If we accept that evaluation can be viewed as a set of invisible rules known by nobody, but somehow understood by everybody, we should add anthropological inquiries to our research toolkit. The paper is organized around some elements of social communication and how they convey new insights to evaluation research for computer and related scientists. We found some technical limitations and offer discussions on how we relate to technology at the same time as we establish expectancies and perceive others' work.<|reference_end|>
arxiv
@article{brust2007redesigning, title={Redesigning Computer-based Learning Environments: Evaluation as Communication}, author={Matthias R. Brust and Christian M. Adriano and Ivan M.L. Ricarte}, journal={arXiv preprint arXiv:0706.1127}, year={2007}, archivePrefix={arXiv}, eprint={0706.1127}, primaryClass={cs.CY cs.HC} }
brust2007redesigning
arxiv-513
0706.1130
A Communication Model for Adaptive Service Provisioning in Hybrid Wireless Networks
<|reference_start|>A Communication Model for Adaptive Service Provisioning in Hybrid Wireless Networks: Mobile entities with wireless links are able to form a mobile ad-hoc network. Such an infrastructureless network does not have to be administrated. However, self-organizing principles have to be applied to deal with upcoming problems, e.g. information dissemination. These kinds of problems are not easy to tackle, requiring complex algorithms. Moreover, the usefulness of pure ad-hoc networks is arguably limited. Hence, enthusiasm for mobile ad-hoc networks, which could eliminate the need for any fixed infrastructure, has been dampened. The goal is to overcome the limitations of pure ad-hoc networks by augmenting them with instant Internet access, e.g. via integration of UMTS or GSM links. However, this raises multiple questions at the technical as well as the organizational level. Motivated by characteristics of small-world networks, which describe an efficient network even without central or organized design, this paper proposes to combine mobile ad-hoc networks and infrastructured networks to form hybrid wireless networks. One main objective is to investigate how this approach can reduce the costs of a permanent backbone link while at the same time providing the benefits of useful information from Internet connectivity or service providers. For the purpose of bridging between the different types of networks, an adequate middleware service is the focus of our investigation. This paper shows our first steps toward this middleware by introducing the Injection Communication paradigm as its principal concept.<|reference_end|>
arxiv
@article{brust2007a, title={A Communication Model for Adaptive Service Provisioning in Hybrid Wireless Networks}, author={Matthias R. Brust and Steffen Rothkugel}, journal={arXiv preprint arXiv:0706.1130}, year={2007}, archivePrefix={arXiv}, eprint={0706.1130}, primaryClass={cs.NI cs.AR cs.CY cs.HC} }
brust2007a
arxiv-514
0706.1137
Automatically Restructuring Practice Guidelines using the GEM DTD
<|reference_start|>Automatically Restructuring Practice Guidelines using the GEM DTD: This paper describes a system capable of semi-automatically filling an XML template from free texts in the clinical domain (practice guidelines). The XML template includes semantic information not explicitly encoded in the text (pairs of conditions and actions/recommendations). Therefore, there is a need to compute the exact scope of conditions over text sequences expressing the required actions. We present a system developed for this task. We show that it yields good performance when applied to the analysis of French practice guidelines.<|reference_end|>
arxiv
@article{bouffier2007automatically, title={Automatically Restructuring Practice Guidelines using the GEM DTD}, author={Amanda Bouffier (LIPN) and Thierry Poibeau (LIPN)}, journal={Proceedings of Biomedical Natural Language Processing (BioNLP) (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0706.1137}, primaryClass={cs.AI} }
bouffier2007automatically
arxiv-515
0706.1141
Multimedia Content Distribution in Hybrid Wireless Networks using Weighted Clustering
<|reference_start|>Multimedia Content Distribution in Hybrid Wireless Networks using Weighted Clustering: Fixed infrastructured networks naturally support centralized approaches for group management and information provisioning. Contrary to infrastructured networks, in multi-hop ad-hoc networks each node acts as a router as well as sender and receiver. Some applications, however, require hierarchical arrangements that, for practical reasons, have to be established locally and in a self-organized way. An additional challenge is to deal with mobility, which causes permanent network partitioning and re-organization. Technically, these problems can be tackled by providing additional uplinks to a backbone network, which can be used to access resources in the Internet as well as to inter-link multiple ad-hoc network partitions, creating a hybrid wireless network. In this paper, we present a prototypically implemented hybrid wireless network system optimized for multimedia content distribution. To efficiently manage the ad-hoc communicating devices, a weighted clustering algorithm is introduced. The proposed localized algorithm deals with mobility, but does not require geographical information or distances.<|reference_end|>
arxiv
@article{andronache2007multimedia, title={Multimedia Content Distribution in Hybrid Wireless Networks using Weighted Clustering}, author={Adrian Andronache and Matthias R. Brust and Steffen Rothkugel}, journal={arXiv preprint arXiv:0706.1141}, year={2007}, archivePrefix={arXiv}, eprint={0706.1141}, primaryClass={cs.MM cs.NI} }
andronache2007multimedia
arxiv-516
0706.1142
Localized Support for Injection Point Election in Hybrid Networks
<|reference_start|>Localized Support for Injection Point Election in Hybrid Networks: Ad-hoc networks, a promising trend in wireless technology, fail to work properly in a global setting. In most cases, self-organization and cost-free local communication cannot compensate for the need to stay connected and to gather urgent information just in time. Additionally equipping mobile devices with GSM or UMTS adapters, in order to communicate with arbitrary remote devices or even a fixed network infrastructure, provides an opportunity. Devices that operate as intermediate nodes between the ad-hoc network and a reliable backbone network are potential injection points. They allow disseminating received information within the local neighborhood. The effectiveness of different devices serving as injection points differs substantially. For practical reasons, the determination of injection points should be done locally, within the ad-hoc network partitions. We analyze different localized algorithms using at most 2-hop neighboring information. Results show that devices selected this way spread information more efficiently through the ad-hoc network. Our results can also be applied to support the election process for clusterheads in the field of clustering mechanisms.<|reference_end|>
arxiv
@article{brust2007localized, title={Localized Support for Injection Point Election in Hybrid Networks}, author={Matthias R. Brust and Steffen Rothkugel}, journal={arXiv preprint arXiv:0706.1142}, year={2007}, archivePrefix={arXiv}, eprint={0706.1142}, primaryClass={cs.DC cs.NI} }
brust2007localized
arxiv-517
0706.1151
A taxonomic Approach to Topology Control in Ad-hoc and Wireless Networks
<|reference_start|>A taxonomic Approach to Topology Control in Ad-hoc and Wireless Networks: Topology Control (TC) aims at tuning the topology of highly dynamic networks to provide better control over network resources and to increase the efficiency of communication. Recently, many TC protocols have been proposed. The protocols are designed for preserving connectivity, minimizing energy consumption, maximizing the overall network coverage, or maximizing network capacity. Each TC protocol makes different assumptions about the network topology, environment detection resources, and control capacities. This circumstance makes it extremely difficult to comprehend the role and purpose of each protocol. To tackle this situation, a taxonomy for TC protocols is presented in this paper. Additionally, some TC protocols are classified based upon this taxonomy.<|reference_end|>
arxiv
@article{brust2007taxonomic, title={A taxonomic Approach to Topology Control in Ad-hoc and Wireless Networks}, author={Matthias R. Brust and Steffen Rothkugel}, journal={arXiv preprint arXiv:0706.1151}, year={2007}, archivePrefix={arXiv}, eprint={0706.1151}, primaryClass={cs.NI cs.DC} }
brust2007taxonomic
arxiv-518
0706.1162
The multiple viewpoints as approach to information retrieval within collaborative development context
<|reference_start|>The multiple viewpoints as approach to information retrieval within collaborative development context: Nowadays, to achieve competitive advantage, industrial companies consider that success rests on great product development, that is, on managing the product throughout its entire lifecycle. Achieving this goal requires tight collaboration between actors from a wide variety of domains, using different software tools that produce various product data types and formats. The actors' collaboration is mainly based on exchanging and sharing product information. The representation of the actors' viewpoints is the underlying requirement of collaborative product development. The multiple viewpoints approach was designed to provide an organizational framework following the actors' perspectives in the collaboration, and their relationships. The approach acknowledges the inevitability of multiple interpretations of product information as different views, promotes the gathering of actors' interests, and encourages the retrieval of adequate information while providing support for integration through PLM and/or SCM collaboration. In this paper, a multiple viewpoints representation is proposed. The product, process, and organization information models are discussed. A series of issues concerning the viewpoints representation is discussed in detail. Based on the XML standard and taking an electrical connector as an example, an application case of product information modeling is presented.<|reference_end|>
arxiv
@article{geryville2007the, title={The multiple viewpoints as approach to information retrieval within collaborative development context}, author={Hichem Geryville (LIESP) and Yacine Ouzrout (LIESP) and Abdelaziz Bouras (LIESP) and Nikolaos Sapidis}, journal={In Information and Communication Technologies International Symposium (Proceedings of IEEE), Fez, Morocco (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0706.1162}, primaryClass={cs.HC} }
geryville2007the
arxiv-519
0706.1169
Vector Precoding for Wireless MIMO Systems: A Replica Analysis
<|reference_start|>Vector Precoding for Wireless MIMO Systems: A Replica Analysis: We apply the replica method to analyze vector precoding, a method to reduce transmit power in antenna array communications. The analysis applies to a very general class of channel matrices. The statistics of the channel matrix enter the transmitted energy per symbol via its R-transform. We find that vector precoding performs much better for complex than for real alphabets. As a byproduct, we find a nonlinear precoding method with polynomial complexity that outperforms NP-hard Tomlinson-Harashima precoding for binary modulation on complex channels if the number of transmit antennas is slightly larger than twice the number of receive antennas.<|reference_end|>
arxiv
@article{mueller2007vector, title={Vector Precoding for Wireless MIMO Systems: A Replica Analysis}, author={Ralf R. Mueller and Dongning Guo and Aris L. Moustakas}, journal={arXiv preprint arXiv:0706.1169}, year={2007}, doi={10.1109/JSAC.2008.080411}, archivePrefix={arXiv}, eprint={0706.1169}, primaryClass={cs.IT cond-mat.stat-mech math.IT} }
mueller2007vector
arxiv-520
0706.1179
Collaborative product and process model: Multiple Viewpoints approach
<|reference_start|>Collaborative product and process model: Multiple Viewpoints approach: The design and development of complex products invariably involves many actors who have different points of view on the problem they are addressing, the product being developed, and the process by which it is being developed. The actors' viewpoints approach was designed to provide an organisational framework in which these different perspectives, or points of view, and their relationships could be explicitly gathered and formatted (by the focus of each actor's activity). The approach acknowledges the inevitability of multiple interpretations of product information as different views, promotes the gathering of actors' interests, and encourages the retrieval of adequate information while providing support for integration through PLM and/or SCM collaboration. In this paper, we present our multiple viewpoints approach and illustrate it with an industrial example of a cyclone vessel product.<|reference_end|>
arxiv
@article{geryville2007collaborative, title={Collaborative product and process model: Multiple Viewpoints approach}, author={Hichem Geryville (LIESP) and Abdelaziz Bouras (LIESP) and Yacine Ouzrout (LIESP) and Nikolaos Sapidis}, journal={Innovative Products and Services through Collaborative Networks (2006) 542}, year={2007}, doi={10.1000/ISBN0-85358-228-9}, archivePrefix={arXiv}, eprint={0706.1179}, primaryClass={cs.OH cs.IR} }
geryville2007collaborative
arxiv-521
0706.1201
Developing a Collaborative and Autonomous Training and Learning Environment for Hybrid Wireless Networks
<|reference_start|>Developing a Collaborative and Autonomous Training and Learning Environment for Hybrid Wireless Networks: With larger memory capacities and the ability to link into wireless networks, more and more students use palmtop and handheld computers for learning activities. However, existing software for Web-based learning is not well-suited for such mobile devices, due both to constrained user interfaces and to the communication effort required. A new generation of applications for the learning domain, explicitly designed to work on these kinds of small mobile devices, has to be developed. For this purpose, we introduce CARLA, a cooperative learning system designed to act in hybrid wireless networks. As a cooperative environment, CARLA aims at disseminating teaching material, notes, and even components of itself through both fixed and mobile networks to interested nodes. Due to the mobility of nodes, CARLA deals with upcoming problems such as network partitions, synchronization of teaching material, resource dependencies, and time constraints.<|reference_end|>
arxiv
@article{lobo2007developing, title={Developing a Collaborative and Autonomous Training and Learning Environment for Hybrid Wireless Networks}, author={Jose Eduardo M. Lobo and Jorge Luis Risco Becerra and Matthias R. Brust and Steffen Rothkugel and Christian M. Adriano}, journal={arXiv preprint arXiv:0706.1201}, year={2007}, archivePrefix={arXiv}, eprint={0706.1201}, primaryClass={cs.CY cs.HC cs.NI} }
lobo2007developing
arxiv-522
0706.1290
Temporal Reasoning without Transitive Tables
<|reference_start|>Temporal Reasoning without Transitive Tables: Representing and reasoning about qualitative temporal information is an essential part of many artificial intelligence tasks. Many models have been proposed in the literature for representing such temporal information. All derive from a point-based or an interval-based framework. One fundamental reasoning task that arises in applications of these frameworks is given by the following scheme: given possibly indefinite and incomplete knowledge of the binary relationships between some temporal objects, find the consistent scenarios between all these objects. All these models require transitive tables -- or, similarly, inference rules -- for solving such tasks. We have defined an alternative model, S-languages, to represent qualitative temporal information, based on only two relations, \emph{precedence} and \emph{simultaneity}. In this paper, we show how this model makes it possible to avoid transitive tables or inference rules when handling this kind of problem.<|reference_end|>
arxiv
@article{schwer2007temporal, title={Temporal Reasoning without Transitive Tables}, author={Sylviane R. Schwer (LIPN)}, journal={arXiv preprint arXiv:0706.1290}, year={2007}, archivePrefix={arXiv}, eprint={0706.1290}, primaryClass={cs.AI} }
schwer2007temporal
arxiv-523
0706.1318
Constructing a maximum utility slate of on-line advertisements
<|reference_start|>Constructing a maximum utility slate of on-line advertisements: We present an algorithm for constructing an optimal slate of sponsored search advertisements which respects the ordering that is the outcome of a generalized second price auction, but which must also accommodate complicating factors such as overall budget constraints. The algorithm is easily fast enough to use on the fly for typical problem sizes, or as a subroutine in an overall optimization.<|reference_end|>
arxiv
@article{keerthi2007constructing, title={Constructing a maximum utility slate of on-line advertisements}, author={S. Sathiya Keerthi and John A. Tomlin}, journal={arXiv preprint arXiv:0706.1318}, year={2007}, number={YR-2007-001}, archivePrefix={arXiv}, eprint={0706.1318}, primaryClass={cs.DM cs.DS} }
keerthi2007constructing
arxiv-524
0706.1395
Opportunistic Network Coding for Video Streaming over Wireless
<|reference_start|>Opportunistic Network Coding for Video Streaming over Wireless: In this paper, we study video streaming over wireless networks with network coding capabilities. We build upon recent work, which demonstrated that network coding can increase throughput over a broadcast medium, by mixing packets from different flows into a single packet, thus increasing the information content per transmission. Our key insight is that, when the transmitted flows are video streams, network codes should be selected so as to maximize not only the network throughput but also the video quality. We propose video-aware opportunistic network coding schemes that take into account both (i) the decodability of network codes by several receivers and (ii) the importance and deadlines of video packets. Simulation results show that our schemes significantly improve both video quality and throughput.<|reference_end|>
arxiv
@article{seferoglu2007opportunistic, title={Opportunistic Network Coding for Video Streaming over Wireless}, author={Hulya Seferoglu, Athina Markopoulou}, journal={arXiv preprint arXiv:0706.1395}, year={2007}, archivePrefix={arXiv}, eprint={0706.1395}, primaryClass={cs.NI} }
seferoglu2007opportunistic
arxiv-525
0706.1399
Duality and Stability Regions of Multi-rate Broadcast and Multiple Access Networks
<|reference_start|>Duality and Stability Regions of Multi-rate Broadcast and Multiple Access Networks: We characterize stability regions of two-user fading Gaussian multiple access (MAC) and broadcast (BC) networks with centralized scheduling. The data to be transmitted to the users is encoded into codewords of fixed length. The rates of the codewords used are restricted to a fixed set of finite cardinality. With successive decoding and interference cancellation at the receivers, we find the set of arrival rates that can be stabilized over the MAC and BC networks. In MAC and BC networks with average power constraints, we observe that the duality property that relates the MAC and BC information theoretic capacity regions extend to their stability regions as well. In MAC and BC networks with peak power constraints, the union of stability regions of dual MAC networks is found to be strictly contained in the BC stability region.<|reference_end|>
arxiv
@article{cadambe2007duality, title={Duality and Stability Regions of Multi-rate Broadcast and Multiple Access Networks}, author={Viveck R. Cadambe and Syed A. Jafar}, journal={arXiv preprint arXiv:0706.1399}, year={2007}, archivePrefix={arXiv}, eprint={0706.1399}, primaryClass={cs.IT math.IT} }
cadambe2007duality
arxiv-526
0706.1402
Analyzing Design Process and Experiments on the AnITA Generic Tutoring System
<|reference_start|>Analyzing Design Process and Experiments on the AnITA Generic Tutoring System: In the field of tutoring systems, investigations have shown that there are many tutoring systems tied to a specific domain that, because of their static architecture, cannot be adapted to other domains. As a consequence, often neither methods nor knowledge can be reused. In addition, the knowledge engineer must have programming skills in order to enhance and evaluate the system. One particular challenge is to tackle these problems with the development of a generic tutoring system. AnITA, as a stand-alone application, has been developed and implemented particularly for this purpose. However, in the testing phase, we discovered that this architecture did not fully match the user's intuitive understanding of the use of a learning tool. Therefore, AnITA has been redesigned to work exclusively as a client/server application and renamed AnITA2. This paper discusses the developments made to the AnITA tutoring system, the goal of which is to use generic principles for system re-use in any domain. Two experiments were conducted, and the results are presented in this paper.<|reference_end|>
arxiv
@article{brust2007analyzing, title={Analyzing Design Process and Experiments on the AnITA Generic Tutoring System}, author={Matthias R. Brust, Steffen Rothkugel}, journal={arXiv preprint arXiv:0706.1402}, year={2007}, archivePrefix={arXiv}, eprint={0706.1402}, primaryClass={cs.CY cs.HC} }
brust2007analyzing
arxiv-527
0706.1409
A Proof of a Recursion for Bessel Moments
<|reference_start|>A Proof of a Recursion for Bessel Moments: We provide a proof of a conjecture in (Bailey, Borwein, Borwein, Crandall 2007) on the existence and form of linear recursions for moments of powers of the Bessel function $K_0$.<|reference_end|>
arxiv
@article{borwein2007a, title={A Proof of a Recursion for Bessel Moments}, author={Jonathan M. Borwein, Bruno Salvy (INRIA Rocquencourt)}, journal={arXiv preprint arXiv:0706.1409}, year={2007}, doi={10.1080/10586458.2008.10129032}, archivePrefix={arXiv}, eprint={0706.1409}, primaryClass={cs.SC math.CA} }
borwein2007a
arxiv-528
0706.1410
Evolutionary Mesh Numbering: Preliminary Results
<|reference_start|>Evolutionary Mesh Numbering: Preliminary Results: Mesh numbering is a critical issue in Finite Element Methods, as the computational cost of one analysis is highly dependent on the order of the nodes of the mesh. This paper presents some preliminary investigations on the problem of mesh numbering using Evolutionary Algorithms. Three conclusions can be drawn from these experiments. First, the results of the up-to-date method used in all FEM software packages (Gibbs' method) can be consistently improved; second, none of the crossover operators tried so far (either general or problem-specific) proved useful; third, though the general tendency in Evolutionary Computation seems to be the hybridization with other methods (deterministic or heuristic), none of the attempts presented has met with success yet. The good news, however, is that this algorithm allows an improvement over the standard heuristic method of between 12% and 20% for both the 1545-node and 5453-node meshes used as a test-bed. Finally, some strange interaction between the selection scheme and the use of a problem-specific mutation operator was observed, which calls for further investigation.<|reference_end|>
arxiv
@article{sourd2007evolutionary, title={Evolutionary Mesh Numbering: Preliminary Results}, author={Francis Sourd (CMAP), Marc Schoenauer (CMAP)}, journal={In Adaptive Computing in Design and Manufacture, ACDM'98 (1998) 137-150}, year={2007}, archivePrefix={arXiv}, eprint={0706.1410}, primaryClass={cs.NA cs.NE math.NA math.OC} }
sourd2007evolutionary
arxiv-529
0706.1456
A Generic Model of Contracts for Embedded Systems
<|reference_start|>A Generic Model of Contracts for Embedded Systems: We present the mathematical foundations of the contract-based model developed in the framework of the SPEEDS project. SPEEDS aims at developing methods and tools to support "speculative design", a design methodology in which distributed designers develop different aspects of the overall system, in a concurrent but controlled way. Our generic mathematical model of contract supports this style of development. This is achieved by focusing on behaviors, by supporting the notion of "rich component" where diverse (functional and non-functional) aspects of the system can be considered and combined, by representing rich components via their set of associated contracts, and by formalizing the whole process of component composition.<|reference_end|>
arxiv
@article{benveniste2007a, title={A Generic Model of Contracts for Embedded Systems}, author={Albert Benveniste (IRISA), Benoit Caillaud (IRISA), Roberto Passerone}, journal={arXiv preprint arXiv:0706.1456}, year={2007}, archivePrefix={arXiv}, eprint={0706.1456}, primaryClass={cs.SE} }
benveniste2007a
arxiv-530
0706.1477
VPSPACE and a transfer theorem over the complex field
<|reference_start|>VPSPACE and a transfer theorem over the complex field: We extend the transfer theorem of [KP2007] to the complex field. That is, we investigate the links between the class VPSPACE of families of polynomials and the Blum-Shub-Smale model of computation over C. Roughly speaking, a family of polynomials is in VPSPACE if its coefficients can be computed in polynomial space. Our main result is that if (uniform, constant-free) VPSPACE families can be evaluated efficiently then the class PAR of decision problems that can be solved in parallel polynomial time over the complex field collapses to P. As a result, one must first be able to show that there are VPSPACE families which are hard to evaluate in order to separate P from NP over C, or even from PAR.<|reference_end|>
arxiv
@article{koiran2007vpspace, title={VPSPACE and a transfer theorem over the complex field}, author={Pascal Koiran (LIP), Sylvain Perifel (LIP)}, journal={arXiv preprint arXiv:0706.1477}, year={2007}, archivePrefix={arXiv}, eprint={0706.1477}, primaryClass={cs.CC} }
koiran2007vpspace
arxiv-531
0706.1563
Optimal Choice of Threshold in Two Level Processor Sharing
<|reference_start|>Optimal Choice of Threshold in Two Level Processor Sharing: We analyze the Two Level Processor Sharing (TLPS) scheduling discipline with the hyper-exponential job size distribution and with the Poisson arrival process. TLPS is a convenient model to study the benefit of the file size based differentiation in TCP/IP networks. In the case of the hyper-exponential job size distribution with two phases, we find a closed form analytic expression for the expected sojourn time and an approximation for the optimal value of the threshold that minimizes the expected sojourn time. In the case of the hyper-exponential job size distribution with more than two phases, we derive a tight upper bound for the expected sojourn time conditioned on the job size. We show that when the variance of the job size distribution increases, the gain in system performance increases and the sensitivity to the choice of the threshold near its optimal value decreases.<|reference_end|>
arxiv
@article{avrachenkov2007optimal, title={Optimal Choice of Threshold in Two Level Processor Sharing}, author={Konstantin Avrachenkov (INRIA Sophia Antipolis), Patrick Brown (FT R&D), Natalia Osipova (INRIA Sophia Antipolis)}, journal={arXiv preprint arXiv:0706.1563}, year={2007}, archivePrefix={arXiv}, eprint={0706.1563}, primaryClass={cs.NI} }
avrachenkov2007optimal
arxiv-532
0706.1588
Detection of Gauss-Markov Random Fields with Nearest-Neighbor Dependency
<|reference_start|>Detection of Gauss-Markov Random Fields with Nearest-Neighbor Dependency: The problem of hypothesis testing against independence for a Gauss-Markov random field (GMRF) is analyzed. Assuming an acyclic dependency graph, an expression for the log-likelihood ratio of detection is derived. Assuming random placement of nodes over a large region according to the Poisson or uniform distribution and nearest-neighbor dependency graph, the error exponent of the Neyman-Pearson detector is derived using large-deviations theory. The error exponent is expressed as a dependency-graph functional and the limit is evaluated through a special law of large numbers for stabilizing graph functionals. The exponent is analyzed for different values of the variance ratio and correlation. It is found that a more correlated GMRF has a higher exponent at low values of the variance ratio whereas the situation is reversed at high values of the variance ratio.<|reference_end|>
arxiv
@article{anandkumar2007detection, title={Detection of Gauss-Markov Random Fields with Nearest-Neighbor Dependency}, author={Animashree Anandkumar, Lang Tong, Ananthram Swami}, journal={IEEE Transactions on Information Theory, vol. 55, no. 2, Feb. 2009}, year={2007}, doi={10.1109/TIT.2008.2009855}, archivePrefix={arXiv}, eprint={0706.1588}, primaryClass={cs.IT math.IT} }
anandkumar2007detection
arxiv-533
0706.1614
Non-Cooperative Scheduling of Multiple Bag-of-Task Applications
<|reference_start|>Non-Cooperative Scheduling of Multiple Bag-of-Task Applications: Multiple applications that execute concurrently on heterogeneous platforms compete for CPU and network resources. In this paper we analyze the behavior of $K$ non-cooperative schedulers, each using the optimal strategy that maximizes its efficiency while fairness is ensured at the system level, ignoring application characteristics. We limit our study to simple single-level master-worker platforms and to the case where each scheduler is in charge of a single application consisting of a large number of independent tasks. The tasks of a given application all have the same computation and communication requirements, but these requirements can vary from one application to another. In this context, we assume that each scheduler aims at maximizing its throughput. We give a closed-form formula for the equilibrium reached by such a system and study its performance. We characterize the situations where this Nash equilibrium is optimal (in the Pareto sense) and show that even though no catastrophic situation (Braess-like paradox) can occur, such an equilibrium can be arbitrarily bad for any classical performance measure.<|reference_end|>
arxiv
@article{legrand2007non-cooperative, title={Non-Cooperative Scheduling of Multiple Bag-of-Task Applications}, author={Arnaud Legrand (INRIA Rhône-Alpes / Id-Imag, Lig), Corinne Touati (INRIA Rhône-Alpes / Id-Imag, Lig)}, journal={In INFOCOM (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0706.1614}, primaryClass={cs.DC cs.GT} }
legrand2007non-cooperative
arxiv-534
0706.1617
Relative Strength of Strategy Elimination Procedures
<|reference_start|>Relative Strength of Strategy Elimination Procedures: We compare here the relative strength of four widely used procedures on finite strategic games: iterated elimination of weakly/strictly dominated strategies by a pure/mixed strategy. A complication is that none of these procedures is based on a monotonic operator. To deal with this problem we use 'global' versions of these operators.<|reference_end|>
arxiv
@article{apt2007relative, title={Relative Strength of Strategy Elimination Procedures}, author={Krzysztof R. Apt}, journal={Economics Bulletin, Vol. 3, no. 21, pp. 1-9 (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0706.1617}, primaryClass={cs.GT} }
apt2007relative
arxiv-535
0706.1642
On the growth of components with non fixed excesses
<|reference_start|>On the growth of components with non fixed excesses: Denote by an $l$-component a connected graph with $l$ edges more than vertices. We prove that the expected number of creations of an $(l+1)$-component, by means of adding a new edge to an $l$-component in a randomly growing graph with $n$ vertices, tends to 1 as $l$ and $n$ tend to $\infty$ with $l = o(n^{1/4})$. We also show, under the same conditions on $l$ and $n$, that the expected number of vertices that ever belong to an $l$-component is $\sim (12l)^{1/3} n^{2/3}$.<|reference_end|>
arxiv
@article{baert2007on, title={On the growth of components with non fixed excesses}, author={Anne-Elisabeth Baert (LaRIA), Vlady Ravelomanana (LIPN), Loÿs Thimonier (LaRIA)}, journal={Discrete Applied Mathematics 130, 3 (17/07/2003) 487--493}, year={2007}, doi={10.1016/S0166-218X(03)00326-3}, archivePrefix={arXiv}, eprint={0706.1642}, primaryClass={cs.DM math.CO} }
baert2007on
arxiv-536
0706.1665
Another Proof of Wright's Inequalities
<|reference_start|>Another Proof of Wright's Inequalities: We present a short way of proving the inequalities obtained by Wright in [Journal of Graph Theory, 4: 393 - 407 (1980)] concerning the number of connected graphs with $\ell$ edges more than vertices.<|reference_end|>
arxiv
@article{ravelomanana2007another, title={Another Proof of Wright's Inequalities}, author={Vlady Ravelomanana (LIPN)}, journal={arXiv preprint arXiv:0706.1665}, year={2007}, archivePrefix={arXiv}, eprint={0706.1665}, primaryClass={cs.DM math.CO} }
ravelomanana2007another
arxiv-537
0706.1692
A Methodology for Efficient Space-Time Adapter Design Space Exploration: A Case Study of an Ultra Wide Band Interleaver
<|reference_start|>A Methodology for Efficient Space-Time Adapter Design Space Exploration: A Case Study of an Ultra Wide Band Interleaver: This paper presents a solution to efficiently explore the design space of communication adapters. In most digital signal processing (DSP) applications, the overall architecture of the system is significantly affected by communication architecture, so the designers need specifically optimized adapters. By explicitly modeling these communications within an effective graph-theoretic model and analysis framework, we automatically generate an optimized architecture, named Space-Time AdapteR (STAR). Our design flow inputs a C description of Input/Output data scheduling, and user requirements (throughput, latency, parallelism...), and formalizes communication constraints through a Resource Constraints Graph (RCG). The RCG properties enable an efficient architecture space exploration in order to synthesize a STAR component. The proposed approach has been tested to design an industrial data mixing block example: an Ultra-Wideband interleaver.<|reference_end|>
arxiv
@article{chavet2007a, title={A Methodology for Efficient Space-Time Adapter Design Space Exploration: A Case Study of an Ultra Wide Band Interleaver}, author={Cyrille Chavet (LESTER, STM), Philippe Coussy (LESTER), Pascal Urard (STM), Eric Martin (LESTER)}, journal={Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS) (28/05/2007) 2946}, year={2007}, archivePrefix={arXiv}, eprint={0706.1692}, primaryClass={cs.AR} }
chavet2007a
arxiv-538
0706.1700
Information Criteria and Arithmetic Codings : An Illustration on Raw Images
<|reference_start|>Information Criteria and Arithmetic Codings : An Illustration on Raw Images: In this paper we give a short theoretical description of the general predictive adaptive arithmetic coding technique. The links between this technique and the works of J. Rissanen in the 1980s, in particular the BIC information criterion used in parametric model selection problems, are established. We also design lossless and lossy coding techniques for images. The lossless technique uses a mix of fixed-length coding and arithmetic coding and provides better compression results than those separate methods. That technique is also seen to have an interesting application in the domain of statistics, since it gives a data-driven procedure for the non-parametric histogram selection problem. The lossy technique uses only predictive adaptive arithmetic codes and shows how a good choice of the order of prediction might lead to better results in terms of compression. We illustrate these coding techniques on a raw grayscale image.<|reference_end|>
arxiv
@article{coq2007information, title={Information Criteria and Arithmetic Codings : An Illustration on Raw Images}, author={Guilhem Coq (1), Olivier Alata (2), Marc Arnaudon (1), Christian Olivier (2) ((1) Laboratoire de Mathématiques et Applications Poitiers France, (2) Laboratoire Signal Image et Communications Poitiers France)}, journal={arXiv preprint arXiv:0706.1700}, year={2007}, archivePrefix={arXiv}, eprint={0706.1700}, primaryClass={cs.IT math.IT} }
coq2007information
arxiv-539
0706.1716
Modeling and analysis using hybrid Petri nets
<|reference_start|>Modeling and analysis using hybrid Petri nets: This paper is devoted to the use of hybrid Petri nets (PNs) for the modeling and control of hybrid dynamic systems (HDS). The modeling, analysis and control of HDS attract ever more attention from researchers, and several works have been devoted to these topics. We consider in this paper the extensions of the PN formalism (initially conceived for the modeling and analysis of discrete event systems) in the direction of hybrid modeling. We present, first, the continuous PN models. These models are obtained from discrete PNs by fluidification of the markings. They constitute the first steps in the extension of PNs toward hybrid modeling. Then, we present two hybrid PN models, which differ in the class of HDS they can deal with. The first one is used for deterministic HDS modeling, whereas the second one can deal with HDS with nondeterministic behavior. Keywords: Hybrid dynamic systems; D-elementary hybrid Petri nets; Hybrid automata; Controller synthesis<|reference_end|>
arxiv
@article{ghomri2007modeling, title={Modeling and analysis using hybrid Petri nets}, author={Latéfa Ghomri (GIPSA-lab), Hassane Alla (GIPSA-lab)}, journal={Nonlinear Analysis: Hybrid Systems Volume 1, Issue 2 (01/06/2007) Pages 141-153}, year={2007}, archivePrefix={arXiv}, eprint={0706.1716}, primaryClass={cs.IT math.IT} }
ghomri2007modeling
arxiv-540
0706.1751
MacWilliams Identity for Codes with the Rank Metric
<|reference_start|>MacWilliams Identity for Codes with the Rank Metric: The MacWilliams identity, which relates the weight distribution of a code to the weight distribution of its dual code, is useful in determining the weight distribution of codes. In this paper, we derive the MacWilliams identity for linear codes with the rank metric, and our identity has a different form than that by Delsarte. Using our MacWilliams identity, we also derive related identities for rank metric codes. These identities parallel the binomial and power moment identities derived for codes with the Hamming metric.<|reference_end|>
arxiv
@article{gadouleau2007macwilliams, title={MacWilliams Identity for Codes with the Rank Metric}, author={Maximilien Gadouleau and Zhiyuan Yan}, journal={arXiv preprint arXiv:0706.1751}, year={2007}, archivePrefix={arXiv}, eprint={0706.1751}, primaryClass={cs.IT math.IT} }
gadouleau2007macwilliams
arxiv-541
0706.1755
FreeBSD Mandatory Access Control Usage for Implementing Enterprise Security Policies
<|reference_start|>FreeBSD Mandatory Access Control Usage for Implementing Enterprise Security Policies: FreeBSD was one of the first widely deployed free operating systems to provide mandatory access control. It supports a number of classic MAC models. This tutorial paper addresses exploiting this implementation to enforce typical enterprise security policies of varying complexities.<|reference_end|>
arxiv
@article{bolshakov2007freebsd, title={FreeBSD Mandatory Access Control Usage for Implementing Enterprise Security Policies}, author={Kirill Bolshakov (1), Elena Reshetova (1) ((1) Saint-Petersburg State University of Aerospace Instrumentation)}, journal={arXiv preprint arXiv:0706.1755}, year={2007}, archivePrefix={arXiv}, eprint={0706.1755}, primaryClass={cs.CR} }
bolshakov2007freebsd
arxiv-542
0706.1780
Le travail collaboratif dans le cadre d'un projet architectural
<|reference_start|>Le travail collaboratif dans le cadre d'un projet architectural: The analysis of users' practices and tendencies when searching for information on the Internet makes it possible to highlight several points. The search for information becomes effective once the typology of the various research systems is known. This typology supports the adoption of a research methodology which one can characterize by pull systems, intelligent agents, etc. In addition, the structure of the electronic document, correctly elaborated in advance, supports a higher relevance ratio when finding information. In our article, the problems revolve around the study of the behavior of users in information-search situations, as well as the constitution of a pole of documentary resources within the framework of an architectural project. It is noted that the evolution of documentary resources is related to information technologies.<|reference_end|>
arxiv
@article{ango-obiang2007le, title={Le travail collaboratif dans le cadre d'un projet architectural}, author={Marie-France Ango-Obiang (SITE, Loria)}, journal={In Innovation et tradition de l'association internationale Management Stratégique (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0706.1780}, primaryClass={cs.HC} }
ango-obiang2007le
arxiv-543
0706.1790
How to measure efficiency?
<|reference_start|>How to measure efficiency?: In the context of applied game theory in networking environments, a number of concepts have been proposed to measure both the efficiency and the optimality of resource allocations, the most famous certainly being the price of anarchy and the Jain index. Yet, very few have tried to question these measures and compare them to one another in a general framework, which is the aim of the present article.<|reference_end|>
arxiv
@article{legrand2007how, title={How to measure efficiency?}, author={Arnaud Legrand (INRIA Rhône-Alpes / ID-IMAG), Corinne Touati (INRIA Rhône-Alpes / ID-IMAG)}, journal={arXiv preprint arXiv:0706.1790}, year={2007}, archivePrefix={arXiv}, eprint={0706.1790}, primaryClass={cs.GT} }
legrand2007how
arxiv-544
0706.1860
FIPA-based Interoperable Agent Mobility Proposal
<|reference_start|>FIPA-based Interoperable Agent Mobility Proposal: This paper presents a proposal for a flexible agent mobility architecture based on IEEE-FIPA standards and intended to be one of them. This proposal is a first step towards interoperable mobility mechanisms, which are needed for future agent migration between different kinds of platforms. Our proposal is presented as a flexible and robust architecture that has been successfully implemented in the JADE and AgentScape platforms. It is based on an open set of protocols, allowing new protocols and future improvements to be accommodated in the architecture. With this proposal we demonstrate that a standard architecture for agent mobility capable of supporting several agent platforms can be defined and implemented.<|reference_end|>
arxiv
@article{cucurull2007fipa-based, title={FIPA-based Interoperable Agent Mobility Proposal}, author={Jordi Cucurull, Ramon Marti, Sergi Robles, Joan Borrell, Guillermo Navarro}, journal={arXiv preprint arXiv:0706.1860}, year={2007}, archivePrefix={arXiv}, eprint={0706.1860}, primaryClass={cs.MA cs.NI} }
cucurull2007fipa-based
arxiv-545
0706.1926
Towards understanding and modelling office daily life
<|reference_start|>Towards understanding and modelling office daily life: Measuring and modeling human behavior is a very complex task. In this paper we present our initial thoughts on modeling and automatic recognition of some human activities in an office. We argue that to successfully model human activities, we need to consider both individual behavior and group dynamics. To demonstrate these theoretical approaches, we introduce an experimental system for analyzing everyday activity in our office.<|reference_end|>
arxiv
@article{bezzi2007towards, title={Towards understanding and modelling office daily life}, author={Michele Bezzi, Robin Groenevelt}, journal={arXiv preprint arXiv:0706.1926}, year={2007}, archivePrefix={arXiv}, eprint={0706.1926}, primaryClass={cs.CV cs.CY} }
bezzi2007towards
arxiv-546
0706.2010
Information-theoretic security without an honest majority
<|reference_start|>Information-theoretic security without an honest majority: We present six multiparty protocols with information-theoretic security that tolerate an arbitrary number of corrupt participants. All protocols assume pairwise authentic private channels and a broadcast channel (in a single case, we require a simultaneous broadcast channel). We give protocols for veto, vote, anonymous bit transmission, collision detection, notification and anonymous message transmission. Not assuming an honest majority, in most cases, a single corrupt participant can make the protocol abort. All protocols achieve functionality never obtained before without the use of either computational assumptions or of an honest majority.<|reference_end|>
arxiv
@article{broadbent2007information-theoretic, title={Information-theoretic security without an honest majority}, author={Anne Broadbent and Alain Tapp}, journal={Proceedings of ASIACRYPT 2007 pp. 410-426}, year={2007}, doi={10.1007/978-3-540-76900-2_25}, archivePrefix={arXiv}, eprint={0706.2010}, primaryClass={cs.CR} }
broadbent2007information-theoretic
arxiv-547
0706.2025
On the Performance Evaluation of Encounter-based Worm Interactions Based on Node Characteristics
<|reference_start|>On the Performance Evaluation of Encounter-based Worm Interactions Based on Node Characteristics: An encounter-based network is a frequently disconnected wireless ad-hoc network requiring nearby neighbors to store and forward data utilizing mobility and encounters over time. Using traditional approaches such as gateways or firewalls for deterring worm propagation in encounter-based networks is inappropriate. Because this type of network is highly dynamic and has no specific boundary, a distributed counter-worm mechanism is needed. We propose models for the worm interaction approach that relies upon automated beneficial worm generation to alleviate problems of worm propagation in such networks. We study and analyze the impact of key mobile node characteristics, including node cooperation, immunization, and on-off behavior, on worm propagation and interactions. We validate our proposed model using extensive simulations. We also find that, in addition to immunization, cooperation can reduce the level of worm infection. Furthermore, on-off behavior linearly impacts only the timing aspect but not the overall infection. Using realistic mobile network measurements, we find that encounters are non-uniform; the trends are consistent with the model but the magnitudes are drastically different. Immunization seems to be the most effective in such scenarios. These findings provide insight that we hope will aid the development of counter-worm protocols in future encounter-based networks.<|reference_end|>
arxiv
@article{tanachaiwiwat2007on, title={On the Performance Evaluation of Encounter-based Worm Interactions Based on Node Characteristics}, author={Sapon Tanachaiwiwat, Ahmed Helmy}, journal={arXiv preprint arXiv:0706.2025}, year={2007}, archivePrefix={arXiv}, eprint={0706.2025}, primaryClass={cs.CR cs.NI} }
tanachaiwiwat2007on
arxiv-548
0706.2033
Power Allocation for Discrete-Input Delay-Limited Fading Channels
<|reference_start|>Power Allocation for Discrete-Input Delay-Limited Fading Channels: We consider power allocation algorithms for fixed-rate transmission over Nakagami-m non-ergodic block-fading channels with perfect transmitter and receiver channel state information and discrete input signal constellations, under both short- and long-term power constraints. Optimal power allocation schemes are shown to be direct applications of previous results in the literature. We show that the SNR exponent of the optimal short-term scheme is given by m times the Singleton bound. We also illustrate the significant gains available by employing long-term power constraints. In particular, we analyze the optimal long-term solution, showing that zero outage can be achieved provided that the corresponding short-term SNR exponent with the same system parameters is strictly greater than one. Conversely, if the short-term SNR exponent is smaller than one, we show that zero outage cannot be achieved. In this case, we derive the corresponding long-term SNR exponent as a function of the Singleton bound. Due to the nature of the expressions involved, the complexity of optimal schemes may be prohibitive for system implementation. We therefore propose simple sub-optimal power allocation schemes whose outage probability performance is very close to the minimum outage probability obtained by optimal schemes. We also show the applicability of these techniques to practical systems employing orthogonal frequency division multiplexing.<|reference_end|>
arxiv
@article{nguyen2007power, title={Power Allocation for Discrete-Input Delay-Limited Fading Channels}, author={Khoa D. Nguyen, Albert Guillen i Fabregas and Lars K. Rasmussen}, journal={arXiv preprint arXiv:0706.2033}, year={2007}, archivePrefix={arXiv}, eprint={0706.2033}, primaryClass={cs.IT math.IT} }
nguyen2007power
arxiv-549
0706.2035
Critique of Feinstein's Proof that P is not Equal to NP
<|reference_start|>Critique of Feinstein's Proof that P is not Equal to NP: We examine a proof by Craig Alan Feinstein that P is not equal to NP. We present counterexamples to claims made in his paper and expose a flaw in the methodology he uses to make his assertions. The fault in his argument is the incorrect use of reduction. Feinstein makes incorrect assumptions about the complexity of a problem based on the fact that there is a more complex problem that can be used to solve it. His paper introduces the terminology "imaginary processor" to describe how it is possible to beat the brute force reduction he offers to solve the Subset-Sum problem. The claims made in the paper would not be validly established even were imaginary processors to exist.<|reference_end|>
arxiv
@article{sabo2007critique, title={Critique of Feinstein's Proof that P is not Equal to NP}, author={Kyle Sabo, Ryan Schmitt, Michael Silverman}, journal={arXiv preprint arXiv:0706.2035}, year={2007}, archivePrefix={arXiv}, eprint={0706.2035}, primaryClass={cs.CC} }
sabo2007critique
arxiv-550
0706.2040
Getting started in probabilistic graphical models
<|reference_start|>Getting started in probabilistic graphical models: Probabilistic graphical models (PGMs) have become a popular tool for computational analysis of biological data in a variety of domains. But, what exactly are they and how do they work? How can we use PGMs to discover patterns that are biologically relevant? And to what extent can PGMs help us formulate new hypotheses that are testable at the bench? This note sketches out some answers and illustrates the main ideas behind the statistical approach to biological pattern discovery.<|reference_end|>
arxiv
@article{airoldi2007getting, title={Getting started in probabilistic graphical models}, author={Edoardo M Airoldi}, journal={Airoldi EM (2007) Getting started in probabilistic graphical models. PLoS Comput Biol 3(12): e252}, year={2007}, doi={10.1371/journal.pcbi.0030252}, archivePrefix={arXiv}, eprint={0706.2040}, primaryClass={q-bio.QM cs.LG physics.soc-ph stat.ME stat.ML} }
airoldi2007getting
arxiv-551
0706.2069
Building Portable Thread Schedulers for Hierarchical Multiprocessors: the BubbleSched Framework
<|reference_start|>Building Portable Thread Schedulers for Hierarchical Multiprocessors: the BubbleSched Framework: Exploiting full computational power of current more and more hierarchical multiprocessor machines requires a very careful distribution of threads and data among the underlying non-uniform architecture. Unfortunately, most operating systems only provide a poor scheduling API that does not allow applications to transmit valuable scheduling hints to the system. In a previous paper, we showed that using a bubble-based thread scheduler can significantly improve applications' performance in a portable way. However, since multithreaded applications have various scheduling requirements, there is no universal scheduler that could meet all these needs. In this paper, we present a framework that allows scheduling experts to implement and experiment with customized thread schedulers. It provides a powerful API for dynamically distributing bubbles among the machine in a high-level, portable, and efficient way. Several examples show how experts can then develop, debug and tune their own portable bubble schedulers.<|reference_end|>
arxiv
@article{thibault2007building, title={Building Portable Thread Schedulers for Hierarchical Multiprocessors: the BubbleSched Framework}, author={Samuel Thibault (INRIA Futurs), Raymond Namyst (INRIA Futurs), Pierre-André Wacrenier (INRIA Futurs)}, journal={In EuroPar (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0706.2069}, primaryClass={cs.DC} }
thibault2007building
arxiv-552
0706.2073
An Efficient OpenMP Runtime System for Hierarchical Arch
<|reference_start|>An Efficient OpenMP Runtime System for Hierarchical Arch: Exploiting the full computational power of always deeper hierarchical multiprocessor machines requires a very careful distribution of threads and data among the underlying non-uniform architecture. The emergence of multi-core chips and NUMA machines makes it important to minimize the number of remote memory accesses, to favor cache affinities, and to guarantee fast completion of synchronization steps. By using the BubbleSched platform as a threading backend for the GOMP OpenMP compiler, we are able to easily transpose affinities of thread teams into scheduling hints using abstractions called bubbles. We then propose a scheduling strategy suited to nested OpenMP parallelism. The resulting preliminary performance evaluations show an important improvement of the speedup on a typical NAS OpenMP benchmark application.<|reference_end|>
arxiv
@article{thibault2007an, title={An Efficient OpenMP Runtime System for Hierarchical Arch}, author={Samuel Thibault (INRIA Futurs), François Broquedis (INRIA Futurs), Brice Goglin (INRIA Futurs), Raymond Namyst (INRIA Futurs), Pierre-André Wacrenier (INRIA Futurs)}, journal={In International Workshop on OpenMP (IWOMP) (2007)}, year={2007}, archivePrefix={arXiv}, eprint={0706.2073}, primaryClass={cs.PL} }
thibault2007an
arxiv-553
0706.2076
A Finite Semantics of Simply-Typed Lambda Terms for Infinite Runs of Automata
<|reference_start|>A Finite Semantics of Simply-Typed Lambda Terms for Infinite Runs of Automata: Model checking properties are often described by means of finite automata. Any particular such automaton divides the set of infinite trees into finitely many classes, according to which state has an infinite run. Building the full type hierarchy upon this interpretation of the base type gives a finite semantics for simply-typed lambda-trees. A calculus based on this semantics is proven sound and complete. In particular, for regular infinite lambda-trees it is decidable whether a given automaton has a run or not. As regular lambda-trees are precisely recursion schemes, this decidability result holds for arbitrary recursion schemes of arbitrary level, without any syntactical restriction.<|reference_end|>
arxiv
@article{aehlig2007a, title={A Finite Semantics of Simply-Typed Lambda Terms for Infinite Runs of Automata}, author={Klaus Aehlig}, journal={Logical Methods in Computer Science, Volume 3, Issue 3 (July 4, 2007) lmcs:1232}, year={2007}, doi={10.2168/LMCS-3(3:1)2007}, archivePrefix={arXiv}, eprint={0706.2076}, primaryClass={cs.LO} }
aehlig2007a
arxiv-554
0706.2146
Efficient Multidimensional Data Redistribution for Resizable Parallel Computations
<|reference_start|>Efficient Multidimensional Data Redistribution for Resizable Parallel Computations: Traditional parallel schedulers running on cluster supercomputers support only static scheduling, where the number of processors allocated to an application remains fixed throughout the execution of the job. This results in under-utilization of idle system resources thereby decreasing overall system throughput. In our research, we have developed a prototype framework called ReSHAPE, which supports dynamic resizing of parallel MPI applications executing on distributed memory platforms. The resizing library in ReSHAPE includes support for releasing and acquiring processors and efficiently redistributing application state to a new set of processors. In this paper, we derive an algorithm for redistributing two-dimensional block-cyclic arrays from $P$ to $Q$ processors, organized as 2-D processor grids. The algorithm ensures a contention-free communication schedule for data redistribution if $P_r \leq Q_r$ and $P_c \leq Q_c$. In other cases, the algorithm implements circular row and column shifts on the communication schedule to minimize node contention.<|reference_end|>
arxiv
@article{sudarsan2007efficient, title={Efficient Multidimensional Data Redistribution for Resizable Parallel Computations}, author={Rajesh Sudarsan and Calvin J. Ribbens}, journal={arXiv preprint arXiv:0706.2146}, year={2007}, archivePrefix={arXiv}, eprint={0706.2146}, primaryClass={cs.DC} }
sudarsan2007efficient
arxiv-555
0706.2153
Stability of boundary measures
<|reference_start|>Stability of boundary measures: We introduce the boundary measure at scale r of a compact subset of the n-dimensional Euclidean space. We show how it can be computed for point clouds and suggest these measures can be used for feature detection. The main contribution of this work is the proof of a quantitative stability theorem for boundary measures using tools of convex analysis and geometric measure theory. As a corollary we obtain a stability result for Federer's curvature measures of a compact, allowing to compute them from point-cloud approximations of the compact.<|reference_end|>
arxiv
@article{chazal2007stability, title={Stability of boundary measures}, author={Frédéric Chazal (INRIA Sophia Antipolis), David Cohen-Steiner (INRIA Sophia Antipolis), Quentin Mérigot (INRIA Sophia Antipolis)}, journal={Boundary measures for geometric inference, Found. Comput. Math., 10 (2), pp. 221-240, 2010}, year={2007}, doi={10.1007/s10208-009-9056-2}, archivePrefix={arXiv}, eprint={0706.2153}, primaryClass={cs.CG math.CA math.MG} }
chazal2007stability
arxiv-556
0706.2155
Dualheap Selection Algorithm: Efficient, Inherently Parallel and Somewhat Mysterious
<|reference_start|>Dualheap Selection Algorithm: Efficient, Inherently Parallel and Somewhat Mysterious: An inherently parallel algorithm is proposed that efficiently performs selection: finding the K-th largest member of a set of N members. Selection is a common component of many more complex algorithms and therefore is a widely studied problem. Not much is new in the proposed dualheap selection algorithm: the heap data structure is from J.W.J.Williams, the bottom-up heap construction is from R.W. Floyd, and the concept of a two heap data structure is from J.W.J. Williams and D.E. Knuth. The algorithm's novelty is limited to a few relatively minor implementation twists: 1) the two heaps are oriented with their roots at the partition values rather than at the minimum and maximum values, 2) the coding of one of the heaps (the heap of smaller values) employs negative indexing, and 3) the exchange phase of the algorithm is similar to a bottom-up heap construction, but navigates the heap with a post-order tree traversal. When run on a single processor, the dualheap selection algorithm's performance is competitive with quickselect with median estimation, a common variant of C.A.R. Hoare's quicksort algorithm. When run on parallel processors, the dualheap selection algorithm is superior due to its subtasks that are easily partitioned and innately balanced.<|reference_end|>
arxiv
@article{sepesi2007dualheap, title={Dualheap Selection Algorithm: Efficient, Inherently Parallel and Somewhat Mysterious}, author={Greg Sepesi}, journal={arXiv preprint arXiv:0706.2155}, year={2007}, archivePrefix={arXiv}, eprint={0706.2155}, primaryClass={cs.DS cs.CC cs.DC} }
sepesi2007dualheap
arxiv-557
0706.2293
Resource control of object-oriented programs
<|reference_start|>Resource control of object-oriented programs: A sup-interpretation is a tool which provides an upper bound on the size of a value computed by some symbol of a program. Sup-interpretations have shown their interest to deal with the complexity of first order functional programs. For instance, they allow to characterize all the functions bitwise computable in Alogtime. This paper is an attempt to adapt the framework of sup-interpretations to a fragment of oriented-object programs, including distinct encodings of numbers through the use of constructor symbols, loop and while constructs and non recursive methods with side effects. We give a criterion, called brotherly criterion, which ensures that each brotherly program computes objects whose size is polynomially bounded by the inputs sizes.<|reference_end|>
arxiv
@article{marion2007resource, title={Resource control of object-oriented programs}, author={Jean-Yves Marion and Romain Pechoux}, journal={arXiv preprint arXiv:0706.2293}, year={2007}, archivePrefix={arXiv}, eprint={0706.2293}, primaryClass={cs.PL cs.LO} }
marion2007resource
arxiv-558
0706.2310
Space-time coding techniques with bit-interleaved coded modulations for MIMO block-fading channels
<|reference_start|>Space-time coding techniques with bit-interleaved coded modulations for MIMO block-fading channels: The space-time bit-interleaved coded modulation (ST-BICM) is an efficient technique to obtain high diversity and coding gain on a block-fading MIMO channel. Its maximum-likelihood (ML) performance is computed under ideal interleaving conditions, which enables a global optimization taking into account channel coding. Thanks to a diversity upperbound derived from the Singleton bound, an appropriate choice of the time dimension of the space-time coding is possible, which maximizes diversity while minimizing complexity. Based on the analysis, an optimized interleaver and a set of linear precoders, called dispersive nucleo algebraic (DNA) precoders are proposed. The proposed precoders have good performance with respect to the state of the art and exist for any number of transmit antennas and any time dimension. With turbo codes, they exhibit a frame error rate which does not increase with frame length.<|reference_end|>
arxiv
@article{gresset2007space-time, title={Space-time coding techniques with bit-interleaved coded modulations for MIMO block-fading channels}, author={Nicolas Gresset, Loic Brunel, Joseph Boutros}, journal={arXiv preprint arXiv:0706.2310}, year={2007}, doi={10.1109/TIT.2008.920240}, archivePrefix={arXiv}, eprint={0706.2310}, primaryClass={cs.IT math.IT} }
gresset2007space-time
arxiv-559
0706.2331
Pricing American Options for Jump Diffusions by Iterating Optimal Stopping Problems for Diffusions
<|reference_start|>Pricing American Options for Jump Diffusions by Iterating Optimal Stopping Problems for Diffusions: We approximate the price of the American put for jump diffusions by a sequence of functions, which are computed iteratively. This sequence converges to the price function uniformly and exponentially fast. Each element of the approximating sequence solves an optimal stopping problem for geometric Brownian motion, and can be numerically computed using the classical finite difference methods. We prove the convergence of this numerical scheme and present examples to illustrate its performance.<|reference_end|>
arxiv
@article{bayraktar2007pricing, title={Pricing American Options for Jump Diffusions by Iterating Optimal Stopping Problems for Diffusions}, author={Erhan Bayraktar, Hao Xing}, journal={arXiv preprint arXiv:0706.2331}, year={2007}, archivePrefix={arXiv}, eprint={0706.2331}, primaryClass={cs.CE} }
bayraktar2007pricing
arxiv-560
0706.2434
Interference and Outage in Clustered Wireless Ad Hoc Networks
<|reference_start|>Interference and Outage in Clustered Wireless Ad Hoc Networks: In the analysis of large random wireless networks, the underlying node distribution is almost ubiquitously assumed to be the homogeneous Poisson point process. In this paper, the node locations are assumed to form a Poisson clustered process on the plane. We derive the distributional properties of the interference and provide upper and lower bounds for its CCDF. We consider the probability of successful transmission in an interference limited channel when fading is modeled as Rayleigh. We provide a numerically integrable expression for the outage probability and closed-form upper and lower bounds. We show that when the transmitter-receiver distance is large, the success probability is greater than that of a Poisson arrangement. These results characterize the performance of the system under geographical or MAC-induced clustering. We obtain the maximum intensity of transmitting nodes for a given outage constraint, i.e., the transmission capacity (of this spatial arrangement) and show that it is equal to that of a Poisson arrangement of nodes. For the analysis, techniques from stochastic geometry are used, in particular the probability generating functional of Poisson cluster processes, the Palm characterization of Poisson cluster processes and the Campbell-Mecke theorem.<|reference_end|>
arxiv
@article{ganti2007interference, title={Interference and Outage in Clustered Wireless Ad Hoc Networks}, author={RadhaKrishna Ganti and Martin Haenggi}, journal={arXiv preprint arXiv:0706.2434}, year={2007}, doi={10.1109/TIT.2009.2025543}, archivePrefix={arXiv}, eprint={0706.2434}, primaryClass={cs.IT math.IT} }
ganti2007interference
arxiv-561
0706.2479
Progresses in the Analysis of Stochastic 2D Cellular Automata: a Study of Asynchronous 2D Minority
<|reference_start|>Progresses in the Analysis of Stochastic 2D Cellular Automata: a Study of Asynchronous 2D Minority: Cellular automata are often used to model systems in physics, social sciences, biology that are inherently asynchronous. Over the past 20 years, studies have demonstrated that the behavior of cellular automata drastically changed under asynchronous updates. Still, the few mathematical analyses of asynchronism focus on one-dimensional probabilistic cellular automata, either on single examples or on specific classes. As for other classic dynamical systems in physics, extending known methods from one- to two-dimensional systems is a long lasting challenging problem. In this paper, we address the problem of analysing an apparently simple 2D asynchronous cellular automaton: 2D Minority where each cell, when fired, updates to the minority state of its neighborhood. Our experiments reveal that in spite of its simplicity, the minority rule exhibits a quite complex response to asynchronism. By focusing on the fully asynchronous regime, we are however able to describe completely the asymptotic behavior of this dynamics as long as the initial configuration satisfies some natural constraints. Besides these technical results, we have strong reasons to believe that our techniques relying on defining an energy function from the transition table of the automaton may be extended to the wider class of threshold automata.<|reference_end|>
arxiv
@article{regnault2007progresses, title={Progresses in the Analysis of Stochastic 2D Cellular Automata: a Study of Asynchronous 2D Minority}, author={Damien Regnault and Nicolas Schabanel and Éric Thierry}, journal={arXiv preprint arXiv:0706.2479}, year={2007}, archivePrefix={arXiv}, eprint={0706.2479}, primaryClass={cs.DM} }
regnault2007progresses
arxiv-562
0706.2520
Analysis of Inter-Domain Traffic Correlations: Random Matrix Theory Approach
<|reference_start|>Analysis of Inter-Domain Traffic Correlations: Random Matrix Theory Approach: The traffic behavior of University of Louisville network with the interconnected backbone routers and the number of Virtual Local Area Network (VLAN) subnets is investigated using the Random Matrix Theory (RMT) approach. We employ the system of equal interval time series of traffic counts at all router to router and router to subnet connections as a representation of the inter-VLAN traffic. The cross-correlation matrix C of the traffic rate changes between different traffic time series is calculated and tested against null-hypothesis of random interactions. The majority of the eigenvalues \lambda_{i} of matrix C fall within the bounds predicted by the RMT for the eigenvalues of random correlation matrices. The distribution of eigenvalues and eigenvectors outside of the RMT bounds displays prominent and systematic deviations from the RMT predictions. Moreover, these deviations are stable in time. The method we use provides a unique possibility to accomplish three concurrent tasks of traffic analysis. The method verifies the uncongested state of the network, by establishing the profile of random interactions. It recognizes the system-specific large-scale interactions, by establishing the profile of stable in time non-random interactions. Finally, by looking into the eigenstatistics we are able to detect and allocate anomalies of network traffic interactions.<|reference_end|>
arxiv
@article{rojkova2007analysis, title={Analysis of Inter-Domain Traffic Correlations: Random Matrix Theory Approach}, author={Viktoria Rojkova, Mehmed Kantardzic}, journal={arXiv preprint arXiv:0706.2520}, year={2007}, archivePrefix={arXiv}, eprint={0706.2520}, primaryClass={cs.NI} }
rojkova2007analysis
arxiv-563
0706.2544
Abstract machines for dialogue games
<|reference_start|>Abstract machines for dialogue games: The notion of abstract Boehm tree has arisen as an operationally-oriented distillation of works on game semantics, and has been investigated in two papers. This paper revisits the notion, providing more syntactic support and more examples (like call-by-value evaluation) illustrating the generality of the underlying computing device. Precise correspondences between various formulations of the evaluation mechanism of abstract Boehm trees are established.<|reference_end|>
arxiv
@article{curien2007abstract, title={Abstract machines for dialogue games}, author={Pierre-Louis Curien (PPS), Hugo Herbelin (INRIA Futurs)}, journal={arXiv preprint arXiv:0706.2544}, year={2007}, archivePrefix={arXiv}, eprint={0706.2544}, primaryClass={cs.LO} }
curien2007abstract
arxiv-564
0706.2575
A new lower bound on the independence number of a graph
<|reference_start|>A new lower bound on the independence number of a graph: For a given connected graph G on n vertices and m edges, we prove that its independence number is at least (2m+n+2-sqrt((2m+n+2)^2-16n^2))/8.<|reference_end|>
arxiv
@article{kettani2007a, title={A new lower bound on the independence number of a graph}, author={O.Kettani}, journal={arXiv preprint arXiv:0706.2575}, year={2007}, archivePrefix={arXiv}, eprint={0706.2575}, primaryClass={cs.DM} }
kettani2007a
arxiv-565
0706.2585
Decisive Markov Chains
<|reference_start|>Decisive Markov Chains: We consider qualitative and quantitative verification problems for infinite-state Markov chains. We call a Markov chain decisive w.r.t. a given set of target states F if it almost certainly eventually reaches either F or a state from which F can no longer be reached. While all finite Markov chains are trivially decisive (for every set F), this also holds for many classes of infinite Markov chains. Infinite Markov chains which contain a finite attractor are decisive w.r.t. every set F. In particular, this holds for probabilistic lossy channel systems (PLCS). Furthermore, all globally coarse Markov chains are decisive. This class includes probabilistic vector addition systems (PVASS) and probabilistic noisy Turing machines (PNTM). We consider both safety and liveness problems for decisive Markov chains, i.e., the probabilities that a given set of states F is eventually reached or reached infinitely often, respectively. 1. We express the qualitative problems in abstract terms for decisive Markov chains, and show an almost complete picture of its decidability for PLCS, PVASS and PNTM. 2. We also show that the path enumeration algorithm of Iyer and Narasimha terminates for decisive Markov chains and can thus be used to solve the approximate quantitative safety problem. A modified variant of this algorithm solves the approximate quantitative liveness problem. 3. Finally, we show that the exact probability of (repeatedly) reaching F cannot be effectively expressed (in a uniform way) in Tarski-algebra for either PLCS, PVASS or (P)NTM.<|reference_end|>
arxiv
@article{abdulla2007decisive, title={Decisive Markov Chains}, author={Parosh Aziz Abdulla, Noomene Ben Henda, Richard Mayr}, journal={Logical Methods in Computer Science, Volume 3, Issue 4 (November 8, 2007) lmcs:867}, year={2007}, doi={10.2168/LMCS-3(4:7)2007}, archivePrefix={arXiv}, eprint={0706.2585}, primaryClass={cs.LO cs.DM} }
abdulla2007decisive
arxiv-566
0706.2606
Randomness Extraction via Delta-Biased Masking in the Presence of a Quantum Attacker
<|reference_start|>Randomness Extraction via Delta-Biased Masking in the Presence of a Quantum Attacker: Randomness extraction is of fundamental importance for information-theoretic cryptography. It allows to transform a raw key about which an attacker has some limited knowledge into a fully secure random key, on which the attacker has essentially no information. Up to date, only very few randomness-extraction techniques are known to work against an attacker holding quantum information on the raw key. This is very much in contrast to the classical (non-quantum) setting, which is much better understood and for which a vast amount of different techniques are known and proven to work. We prove a new randomness-extraction technique, which is known to work in the classical setting, to be secure against a quantum attacker as well. Randomness extraction is done by XOR'ing a so-called delta-biased mask to the raw key. Our result allows to extend the classical applications of this extractor to the quantum setting. We discuss the following two applications. We show how to encrypt a long message with a short key, information-theoretically secure against a quantum attacker, provided that the attacker has enough quantum uncertainty on the message. This generalizes the concept of entropically-secure encryption to the case of a quantum attacker. As second application, we show how to do error-correction without leaking partial information to a quantum attacker. Such a technique is useful in settings where the raw key may contain errors, since standard error-correction techniques may provide the attacker with information on, say, a secret key that was used to obtain the raw key.<|reference_end|>
arxiv
@article{fehr2007randomness, title={Randomness Extraction via Delta-Biased Masking in the Presence of a Quantum Attacker}, author={Serge Fehr, Christian Schaffner}, journal={arXiv preprint arXiv:0706.2606}, year={2007}, archivePrefix={arXiv}, eprint={0706.2606}, primaryClass={quant-ph cs.CR} }
fehr2007randomness
arxiv-567
0706.2619
Algorithms for Omega-Regular Games with Imperfect Information
<|reference_start|>Algorithms for Omega-Regular Games with Imperfect Information: We study observation-based strategies for two-player turn-based games on graphs with omega-regular objectives. An observation-based strategy relies on imperfect information about the history of a play, namely, on the past sequence of observations. Such games occur in the synthesis of a controller that does not see the private state of the plant. Our main results are twofold. First, we give a fixed-point algorithm for computing the set of states from which a player can win with a deterministic observation-based strategy for any omega-regular objective. The fixed point is computed in the lattice of antichains of state sets. This algorithm has the advantages of being directed by the objective and of avoiding an explicit subset construction on the game graph. Second, we give an algorithm for computing the set of states from which a player can win with probability 1 with a randomized observation-based strategy for a Buechi objective. This set is of interest because in the absence of perfect information, randomized strategies are more powerful than deterministic ones. We show that our algorithms are optimal by proving matching lower bounds.<|reference_end|>
arxiv
@article{chatterjee2007algorithms, title={Algorithms for Omega-Regular Games with Imperfect Information}, author={Krishnendu Chatterjee, Laurent Doyen, Thomas A. Henzinger, Jean-Francois Raskin}, journal={Logical Methods in Computer Science, Volume 3, Issue 3 (July 27, 2007) lmcs:1094}, year={2007}, doi={10.2168/LMCS-3(3:4)2007}, archivePrefix={arXiv}, eprint={0706.2619}, primaryClass={cs.LO cs.GT} }
chatterjee2007algorithms
arxiv-568
0706.2725
The Complexity of Determining Existence a Hamiltonian Cycle is $O(n^3)$
<|reference_start|>The Complexity of Determining Existence a Hamiltonian Cycle is $O(n^3)$: The Hamiltonian cycle problem in digraph is mapped into a matching cover bipartite graph. Based on this mapping, it is proved that determining existence a Hamiltonian cycle in graph is $O(n^3)$.<|reference_end|>
arxiv
@article{zhu2007the, title={The Complexity of Determining Existence a Hamiltonian Cycle is $O(n^3)$}, author={Guohun Zhu}, journal={arXiv preprint arXiv:0706.2725}, year={2007}, archivePrefix={arXiv}, eprint={0706.2725}, primaryClass={cs.DS cs.CC cs.DM} }
zhu2007the
arxiv-569
0706.2732
A Design Methodology for Space-Time Adapter
<|reference_start|>A Design Methodology for Space-Time Adapter: This paper presents a solution to efficiently explore the design space of communication adapters. In most digital signal processing (DSP) applications, the overall architecture of the system is significantly affected by communication architecture, so the designers need specifically optimized adapters. By explicitly modeling these communications within an effective graph-theoretic model and analysis framework, we automatically generate an optimized architecture, named Space-Time AdapteR (STAR). Our design flow inputs a C description of Input/Output data scheduling, and user requirements (throughput, latency, parallelism...), and formalizes communication constraints through a Resource Constraints Graph (RCG). The RCG properties enable an efficient architecture space exploration in order to synthesize a STAR component. The proposed approach has been tested to design an industrial data mixing block example: an Ultra-Wideband interleaver.<|reference_end|>
arxiv
@article{chavet2007a, title={A Design Methodology for Space-Time Adapter}, author={Cyrille Chavet (LESTER), Philippe Coussy (LESTER), Pascal Urard (STM), Eric Martin (LESTER)}, journal={Proceedings of the 2007 ACM Great Lakes Symposium on VLSI (12/03/2007) 347}, year={2007}, archivePrefix={arXiv}, eprint={0706.2732}, primaryClass={cs.AR} }
chavet2007a
arxiv-570
0706.2746
Abstract Storage Devices
<|reference_start|>Abstract Storage Devices: A quantum storage device differs radically from a conventional physical storage device. Its state can be set to any value in a certain (infinite) state space, but in general every possible read operation yields only partial information about the stored state. The purpose of this paper is to initiate the study of a combinatorial abstraction, called abstract storage device (ASD), which models deterministic storage devices with the property that only partial information about the state can be read, but that there is a degree of freedom as to which partial information should be retrieved. This concept leads to a number of interesting problems which we address, like the reduction of one device to another device, the equivalence of devices, direct products of devices, as well as the factorization of a device into primitive devices. We prove that every ASD has an equivalent ASD with minimal number of states and of possible read operations. Also, we prove that the reducibility problem for ASD's is NP-complete, that the equivalence problem is at least as hard as the graph isomorphism problem, and that the factorization into binary-output devices (if it exists) is unique.<|reference_end|>
arxiv
@article{koenig2007abstract, title={Abstract Storage Devices}, author={Robert Koenig, Ueli Maurer, Stefano Tessaro}, journal={arXiv preprint arXiv:0706.2746}, year={2007}, archivePrefix={arXiv}, eprint={0706.2746}, primaryClass={cs.DM cs.CC cs.IT math.IT} }
koenig2007abstract
arxiv-571
0706.2748
A Survey of Unix Init Schemes
<|reference_start|>A Survey of Unix Init Schemes: In most modern operating systems, init (as in "initialization") is the program launched by the kernel at boot time. It runs as a daemon and typically has PID 1. Init is responsible for spawning all other processes and scavenging zombies. It is also responsible for reboot and shutdown operations. This document describes existing solutions that implement the init process and/or init scripts in Unix-like systems. These solutions range from the legacy and still-in-use BSD and SystemV schemes, to recent and promising schemes from Ubuntu, Apple, Sun and independent developers. Our goal is to highlight their focus and compare their sets of features.<|reference_end|>
arxiv
@article{royon2007a, title={A Survey of Unix Init Schemes}, author={Yvan Royon (INRIA Rh\^one-Alpes), St\'ephane Fr\'enot (INRIA Rh\^one-Alpes)}, journal={arXiv preprint arXiv:0706.2748}, year={2007}, archivePrefix={arXiv}, eprint={0706.2748}, primaryClass={cs.OS} }
royon2007a
arxiv-572
0706.2795
Dirty-paper Coding without Channel Information at the Transmitter and Imperfect Estimation at the Receiver
<|reference_start|>Dirty-paper Coding without Channel Information at the Transmitter and Imperfect Estimation at the Receiver: In this paper, we examine the effects of imperfect channel estimation at the receiver and no channel knowledge at the transmitter on the capacity of the fading Costa's channel with channel state information non-causally known at the transmitter. We derive the optimal Dirty-paper coding (DPC) scheme and its corresponding achievable rates with the assumption of Gaussian inputs. Our results, for uncorrelated Rayleigh fading, provide intuitive insights on the impact of the channel estimate and the channel characteristics (e.g. SNR, fading process, channel training) on the achievable rates. These are useful in practical scenarios of multiuser wireless communications (e.g. Broadcast Channels) and information embedding applications (e.g. robust watermarking). We also study optimal training design adapted to each application. We provide numerical results for a single-user fading Costa's channel with maximum-likelihood (ML) channel estimation. These illustrate an interesting practical trade-off between the amount of training and its impact on the interference cancellation performance of the DPC scheme.<|reference_end|>
arxiv
@article{piantanida2007dirty-paper, title={Dirty-paper Coding without Channel Information at the Transmitter and Imperfect Estimation at the Receiver}, author={Pablo Piantanida and Pierre Duhamel}, journal={Proc. of IEEE International Conference on Communications (ICC), Glasgow, Scotland, 2007}, year={2007}, archivePrefix={arXiv}, eprint={0706.2795}, primaryClass={cs.IT math.IT} }
piantanida2007dirty-paper
arxiv-573
0706.2797
Extraction d'entit\'es dans des collections \'evolutives
<|reference_start|>Extraction d'entit\'es dans des collections \'evolutives: The goal of our work is to use a set of reports and extract named entities, in our case the names of industrial or academic partners. Starting with an initial list of entities, we use a first set of documents to identify syntactic patterns that are then validated in a supervised learning phase on a set of annotated documents. The complete collection is then explored. This approach is similar to the ones used in data extraction from semi-structured documents (wrappers) and does not need any linguistic resources nor a large training set. As our collection of documents will evolve over the years, we expect the performance of the extraction to improve with the increased size of the training set.<|reference_end|>
arxiv
@article{despeyroux2007extraction, title={Extraction d'entit\'es dans des collections \'evolutives}, author={Thierry Despeyroux (INRIA Rocquencourt / INRIA Sophia Antipolis), Eduardo Fraschini (INRIA Rocquencourt / INRIA Sophia Antipolis), Anne-Marie Vercoustre (INRIA Rocquencourt / INRIA Sophia Antipolis)}, journal={Dans 7i\`emes Journ\'ees francophones Extraction et Gestion des Connaissances EGC 2007 76300 (23/01/2007) pp. 533-538}, year={2007}, archivePrefix={arXiv}, eprint={0706.2797}, primaryClass={cs.IR} }
despeyroux2007extraction
arxiv-574
0706.2809
On the Outage Capacity of a Practical Decoder Using Channel Estimation Accuracy
<|reference_start|>On the Outage Capacity of a Practical Decoder Using Channel Estimation Accuracy: The optimal decoder achieving the outage capacity under imperfect channel estimation is investigated. First, by searching within the family of nearest neighbor decoders, which can be easily implemented on most practical coded modulation systems, we derive a decoding metric that minimizes the average of the transmission error probability over all channel estimation errors. This metric, for arbitrary memoryless channels, achieves the capacity of a composite (more noisy) channel. Next, according to the notion of estimation-induced outage capacity (EIO capacity) introduced in our previous work, we characterize maximal achievable information rates associated with the proposed decoder. The performance of the proposed decoding metric over uncorrelated Rayleigh fading MIMO channels is compared to both the classical mismatched maximum-likelihood (ML) decoder and the theoretical limits given by the EIO capacity (i.e. the best decoder in the presence of channel estimation errors). Numerical results show that the derived metric provides significant gains, in terms of achievable information rates and bit error rate (BER), in a bit interleaved coded modulation (BICM) framework, without introducing any additional decoding complexity.<|reference_end|>
arxiv
@article{piantanida2007on, title={On the Outage Capacity of a Practical Decoder Using Channel Estimation Accuracy}, author={Pablo Piantanida, Sajad Sadough and Pierre Duhamel}, journal={Proc. of IEEE International Symposium on Information Theory (ISIT), Nice, France, 2007}, year={2007}, doi={10.1109/ISIT.2007.4557559}, archivePrefix={arXiv}, eprint={0706.2809}, primaryClass={cs.IT math.IT} }
piantanida2007on
arxiv-575
0706.2824
M\'ethodologie de mod\'elisation et d'impl\'ementation d'adaptateurs spatio-temporels
<|reference_start|>M\'ethodologie de mod\'elisation et d'impl\'ementation d'adaptateurs spatio-temporels: The re-use of pre-designed blocks is a well-known concept of software development. This technique has been applied to System-on-Chip (SoC) design, whose complexity and heterogeneity are growing. Re-use relies on high-level components, called virtual components (IP), available in more or less flexible forms. These components are dedicated blocks: digital signal processing (DCT, FFT), telecommunications (Viterbi, TurboCodes),... These blocks rest on a fixed architecture model with very few degrees of personalization. This rigidity is particularly true for the communication interface, whose orders of acquisition and production of data, temporal behavior, and exchange protocols are fixed. The successful integration of such an IP requires that the designer (1) synchronizes the components, (2) converts the protocols between "incompatible" blocks, and (3) buffers the data to guarantee the temporal constraints and the order of the data. However, this phase remains largely manual and error-prone. Our approach proposes a formal modeling based on an original Resource Compatibility Graph. The synthesis flow is based on a set of transformations of the initial graph, leading to an interface architecture that allows the space-time adaptation of data exchanges between several components.<|reference_end|>
arxiv
@article{chavet2007m\'ethodologie, title={M\'ethodologie de mod\'elisation et d'impl\'ementation d'adaptateurs spatio-temporels}, author={Cyrille Chavet (LESTER, STM), Philippe Coussy (LESTER), Pascal Urard (STM), Eric Martin (LESTER)}, journal={Actes de la conference MajecSTIC 2005 (16/11/2005) 151}, year={2007}, archivePrefix={arXiv}, eprint={0706.2824}, primaryClass={cs.AR} }
chavet2007m\'ethodologie
arxiv-576
0706.2839
Cache Analysis of Non-uniform Distribution Sorting Algorithms
<|reference_start|>Cache Analysis of Non-uniform Distribution Sorting Algorithms: We analyse the average-case cache performance of distribution sorting algorithms in the case when keys are independently but not necessarily uniformly distributed. The analysis is for both `in-place' and `out-of-place' distribution sorting algorithms and is more accurate than the analysis presented in \cite{RRESA00}. In particular, this new analysis yields tighter upper and lower bounds when the keys are drawn from a uniform distribution. We use this analysis to tune the performance of the integer sorting algorithm MSB radix sort when it is used to sort independent uniform floating-point numbers (floats). Our tuned MSB radix sort algorithm comfortably outperforms cache-tuned implementations of bucketsort \cite{RR99} and Quicksort when sorting uniform floats from $[0, 1)$.<|reference_end|>
arxiv
@article{rahman2007cache, title={Cache Analysis of Non-uniform Distribution Sorting Algorithms}, author={Naila Rahman and Rajeev Raman}, journal={arXiv preprint arXiv:0706.2839}, year={2007}, archivePrefix={arXiv}, eprint={0706.2839}, primaryClass={cs.DS cs.PF} }
rahman2007cache
arxiv-577
0706.2888
Variations on Kak's Three Stage Quantum Cryptography Protocol
<|reference_start|>Variations on Kak's Three Stage Quantum Cryptography Protocol: This paper introduces a variation on Kak's three-stage quantum key distribution protocol which allows for defence against the man-in-the-middle attack. In addition, we introduce a new protocol which also offers similar resilience against such an attack.<|reference_end|>
arxiv
@article{thomas2007variations, title={Variations on Kak's Three Stage Quantum Cryptography Protocol}, author={James Harold Thomas}, journal={arXiv preprint arXiv:0706.2888}, year={2007}, archivePrefix={arXiv}, eprint={0706.2888}, primaryClass={cs.CR} }
thomas2007variations
arxiv-578
0706.2893
Dualheap Sort Algorithm: An Inherently Parallel Generalization of Heapsort
<|reference_start|>Dualheap Sort Algorithm: An Inherently Parallel Generalization of Heapsort: A generalization of the heapsort algorithm is proposed. At the expense of about 50% more comparison and move operations for typical cases, the dualheap sort algorithm offers several advantages over heapsort: improved cache performance, better performance if the input happens to be already sorted, and easier parallel implementations.<|reference_end|>
arxiv
@article{sepesi2007dualheap, title={Dualheap Sort Algorithm: An Inherently Parallel Generalization of Heapsort}, author={Greg Sepesi}, journal={arXiv preprint arXiv:0706.2893}, year={2007}, archivePrefix={arXiv}, eprint={0706.2893}, primaryClass={cs.DS cs.CC cs.DC} }
sepesi2007dualheap
arxiv-579
0706.2906
Capacity Scaling for MIMO Two-Way Relaying
<|reference_start|>Capacity Scaling for MIMO Two-Way Relaying: A multiple input multiple output (MIMO) two-way relay channel is considered, where two sources want to exchange messages with each other using multiple relay nodes, and both the sources and relay nodes are equipped with multiple antennas. Both sources are assumed to have an equal number of antennas and to have perfect channel state information (CSI) for all the channels of the MIMO two-way relay channel, whereas each relay node is either assumed to have CSI for its transmit and receive channel (the coherent case) or no CSI for any of the channels (the non-coherent case). The main results in this paper are on the scaling behavior of the capacity region of the MIMO two-way relay channel with increasing number of relay nodes. In the coherent case, the capacity region of the MIMO two-way relay channel is shown to scale linearly with the number of antennas at source nodes and logarithmically with the number of relay nodes. In the non-coherent case, the capacity region is shown to scale linearly with the number of antennas at the source nodes and logarithmically with the signal to noise ratio.<|reference_end|>
arxiv
@article{vaze2007capacity, title={Capacity Scaling for MIMO Two-Way Relaying}, author={Rahul Vaze and Robert W. Heath Jr}, journal={arXiv preprint arXiv:0706.2906}, year={2007}, archivePrefix={arXiv}, eprint={0706.2906}, primaryClass={cs.IT math.IT} }
vaze2007capacity
arxiv-580
0706.2926
Reducing the Error Floor
<|reference_start|>Reducing the Error Floor: We discuss how the loop calculus approach of [Chertkov, Chernyak '06], enhanced by the pseudo-codeword search algorithm of [Chertkov, Stepanov '06] and the facet-guessing idea from [Dimakis, Wainwright '06], improves decoding of graph based codes in the error-floor domain. The utility of the new, Linear Programming based, decoding is demonstrated via analysis and simulations of the model $[155,64,20]$ code.<|reference_end|>
arxiv
@article{chertkov2007reducing, title={Reducing the Error Floor}, author={Michael Chertkov (Los Alamos)}, journal={arXiv preprint arXiv:0706.2926}, year={2007}, number={LA-UR-07-4047}, archivePrefix={arXiv}, eprint={0706.2926}, primaryClass={cs.IT math.IT} }
chertkov2007reducing
arxiv-581
0706.2963
Outage Behavior of Discrete Memoryless Channels Under Channel Estimation Errors
<|reference_start|>Outage Behavior of Discrete Memoryless Channels Under Channel Estimation Errors: Classically, communication systems are designed assuming perfect channel state information at the receiver and/or transmitter. However, in many practical situations, only an estimate of the channel is available that differs from the true channel. We address this channel mismatch scenario by using the notion of estimation-induced outage capacity, for which we provide an associated coding theorem and its strong converse, assuming a discrete memoryless channel. We illustrate our ideas via numerical simulations for transmissions over Ricean fading channels under a quality of service (QoS) constraint, using a rate-limited feedback channel and maximum likelihood (ML) channel estimation. Our results provide intuitive insights on the impact of the channel estimate and the channel characteristics (SNR, Ricean K-factor, training sequence length, feedback rate, etc.) on the mean outage capacity.<|reference_end|>
arxiv
@article{piantanida2007outage, title={Outage Behavior of Discrete Memoryless Channels Under Channel Estimation Errors}, author={Pablo Piantanida, Gerald Matz and Pierre Duhamel}, journal={Proc. of International Symposium on Information Theory and its Applications (ISITA), Seoul, Korea, 2006, pp. 417-422}, year={2007}, archivePrefix={arXiv}, eprint={0706.2963}, primaryClass={cs.IT math.IT} }
piantanida2007outage
arxiv-582
0706.2974
Remote laboratories: new technology and standard based architecture
<|reference_start|>Remote laboratories: new technology and standard based architecture: E-Laboratories are important components of e-learning environments, especially in scientific and technical disciplines. The first widespread E-Labs consisted of simulations of real systems (virtual labs), as building remote labs (remote control of real systems) was difficult for lack of industrial standards and common protocols. Nowadays, robotics and automation technologies make it easier to interface systems with computers. In this frame, many researchers (such as those mentioned in [1]) focus on how to set up such remote control. But only a few of them deal with the educational point of view of the problem. This paper outlines our current research and reflection on remote laboratory modelling.<|reference_end|>
arxiv
@article{benmohamed2007remote, title={Remote laboratories: new technology and standard based architecture}, author={Hcene Benmohamed (LIESP), Arnaud Leleve (LIESP), Patrick Pr\'evot (LIESP)}, journal={Proceedings of 2004 International Conference on Information and Communication Technologies: From Theory to Applications (19/04/2004) 101 - 102}, year={2007}, doi={10.1109/ICTTA.2004.1307634}, archivePrefix={arXiv}, eprint={0706.2974}, primaryClass={cs.OH} }
benmohamed2007remote
arxiv-583
0706.3008
A Generic Deployment Framework for Grid Computing and Distributed Applications
<|reference_start|>A Generic Deployment Framework for Grid Computing and Distributed Applications: Deployment of distributed applications on large systems, and especially on grid infrastructures, is becoming an increasingly complex task. Grid users spend a lot of time preparing, installing, and configuring middleware and application binaries on nodes before they can eventually start their applications. The problem is that the deployment process is composed of many heterogeneous tasks that have to be orchestrated in a specific, correct order. As a consequence, automation of the deployment process is currently very difficult to achieve. To address this problem, we propose in this paper a generic deployment framework that automates the execution of the heterogeneous tasks composing the whole deployment process. Our approach is based on reifying as software components all required deployment mechanisms or existing tools. Grid users only have to describe the configuration to deploy in a simple natural language instead of programming or scripting how the deployment process is executed. As a toy example, this framework is used to deploy CORBA component-based applications and OpenCCM middleware on one thousand nodes of the French Grid5000 infrastructure.<|reference_end|>
arxiv
@article{flissi2007a, title={A Generic Deployment Framework for Grid Computing and Distributed Applications}, author={Areski Flissi (LIFL), Philippe Merle (INRIA Futurs)}, journal={OTM 2006, LNCS 4276 (2006) 1402-1411}, year={2007}, doi={10.1007/11914952_26}, archivePrefix={arXiv}, eprint={0706.3008}, primaryClass={cs.DC} }
flissi2007a
arxiv-584
0706.3009
Application of a design space exploration tool to enhance interleaver generation
<|reference_start|>Application of a design space exploration tool to enhance interleaver generation: This paper presents a methodology to efficiently explore the design space of communication adapters. In most digital signal processing (DSP) applications, the overall performance of the system is significantly affected by communication architectures, as a consequence the designers need specifically optimized adapters. By explicitly modeling these communications within an effective graph-theoretic model and analysis framework, we automatically generate an optimized architecture, named Space-Time AdapteR (STAR). Our design flow inputs a C description of Input/Output data scheduling, and user requirements (throughput, latency, parallelism...), and formalizes communication constraints through a Resource Constraints Graph (RCG). Design space exploration is then performed through associated tools, to synthesize a STAR component under time-to-market constraints. The proposed approach has been tested to design an industrial data mixing block example: an Ultra-Wideband interleaver.<|reference_end|>
arxiv
@article{chavet2007application, title={Application of a design space exploration tool to enhance interleaver generation}, author={Cyrille Chavet (LESTER, STM), Philippe Coussy (LESTER), Pascal Urard (STM), Eric Martin (LESTER)}, journal={Proceedings of the European Signal Processing Conference (EUSIPCO-2007) (03/09/2007)}, year={2007}, archivePrefix={arXiv}, eprint={0706.3009}, primaryClass={cs.AR cs.IT math.IT} }
chavet2007application
arxiv-585
0706.3060
N-Body Simulations on GPUs
<|reference_start|>N-Body Simulations on GPUs: Commercial graphics processors (GPUs) have high compute capacity at very low cost, which makes them attractive for general purpose scientific computing. In this paper we show how graphics processors can be used for N-body simulations to obtain improvements in performance over current generation CPUs. We have developed a highly optimized algorithm for performing the O(N^2) force calculations that constitute the major part of stellar and molecular dynamics simulations. In some of the calculations, we achieve sustained performance of nearly 100 GFlops on an ATI X1900XTX. The performance on GPUs is comparable to specialized processors such as GRAPE-6A and MDGRAPE-3, but at a fraction of the cost. Furthermore, the wide availability of GPUs has significant implications for cluster computing and distributed computing efforts like Folding@Home.<|reference_end|>
arxiv
@article{elsen2007n-body, title={N-Body Simulations on GPUs}, author={Erich Elsen, V. Vishal, Mike Houston, Vijay Pande, Pat Hanrahan, Eric Darve}, journal={arXiv preprint arXiv:0706.3060}, year={2007}, archivePrefix={arXiv}, eprint={0706.3060}, primaryClass={cs.CE cs.DC} }
elsen2007n-body
arxiv-586
0706.3076
On the Performance of Joint Fingerprint Embedding and Decryption Scheme
<|reference_start|>On the Performance of Joint Fingerprint Embedding and Decryption Scheme: To date, little work has been done to analyze the performance of joint fingerprint embedding and decryption (JFD) schemes. In this paper, the security of the joint fingerprint embedding and decryption scheme proposed by Kundur et al. is analyzed and improved. The analyses include the security against unauthorized customers, the security against authorized customers, the relationship between security and robustness, the relationship between security and imperceptibility, and the perceptual security. Based on these analyses, some means are proposed to strengthen the system, such as multi-key encryption and DC coefficient encryption. The method can be used to analyze other JFD schemes, and is expected to provide valuable information for designing JFD schemes.<|reference_end|>
arxiv
@article{lian2007on, title={On the Performance of Joint Fingerprint Embedding and Decryption Scheme}, author={Shiguo Lian, Zhongxuan Liu, Zhen Ren, Haila Wang}, journal={arXiv preprint arXiv:0706.3076}, year={2007}, archivePrefix={arXiv}, eprint={0706.3076}, primaryClass={cs.MM cs.CR} }
lian2007on
arxiv-587
0706.3104
Group Testing with Random Pools: optimal two-stage algorithms
<|reference_start|>Group Testing with Random Pools: optimal two-stage algorithms: We study Probabilistic Group Testing of a set of $N$ items each of which is defective with probability $p$. We focus on the double limit of small defect probability, $p\ll 1$, and large number of variables, $N\gg 1$, taking either $p\to 0$ after $N\to\infty$ or $p=1/N^{\beta}$ with $\beta\in(0,1/2)$. In both settings the optimal number of tests which are required to identify with certainty the defectives via a two-stage procedure, $\bar T(N,p)$, is known to scale as $Np|\log p|$. Here we determine the sharp asymptotic value of $\bar T(N,p)/(Np|\log p|)$ and construct a class of two-stage algorithms over which this optimal value is attained. This is done by choosing a proper bipartite regular graph (of tests and variable nodes) for the first stage of the detection. Furthermore we prove that this optimal value is also attained on average over a random bipartite graph where all variables have the same degree, while the tests have Poisson-distributed degrees. Finally, we improve the existing upper and lower bounds for the optimal number of tests in the case $p=1/N^{\beta}$ with $\beta\in[1/2,1)$.<|reference_end|>
arxiv
@article{mezard2007group, title={Group Testing with Random Pools: optimal two-stage algorithms}, author={Marc Mezard, Cristina Toninelli}, journal={arXiv preprint arXiv:0706.3104}, year={2007}, archivePrefix={arXiv}, eprint={0706.3104}, primaryClass={cs.DS cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT} }
mezard2007group
arxiv-588
0706.3129
Closed-Form Density of States and Localization Length for a Non-Hermitian Disordered System
<|reference_start|>Closed-Form Density of States and Localization Length for a Non-Hermitian Disordered System: We calculate the Lyapunov exponent for the non-Hermitian Zakharov-Shabat eigenvalue problem corresponding to the attractive non-linear Schroedinger equation with a Gaussian random pulse as initial value function. Using an extension of the Thouless formula to non-Hermitian random operators, we calculate the corresponding average density of states. We analyze two cases, one with circularly symmetric complex Gaussian pulses and the other with real Gaussian pulses. We discuss the implications in the context of the information transmission through non-linear optical fibers.<|reference_end|>
arxiv
@article{kazakopoulos2007closed-form, title={Closed-Form Density of States and Localization Length for a Non-Hermitian Disordered System}, author={Pavlos Kazakopoulos and Aris L. Moustakas}, journal={arXiv preprint arXiv:0706.3129}, year={2007}, doi={10.1103/PhysRevE.78.016603}, archivePrefix={arXiv}, eprint={0706.3129}, primaryClass={cond-mat.dis-nn cond-mat.stat-mech cs.IT math.IT nlin.SI} }
kazakopoulos2007closed-form
arxiv-589
0706.3132
EasyVoice: Integrating voice synthesis with Skype
<|reference_start|>EasyVoice: Integrating voice synthesis with Skype: This paper presents EasyVoice, a system that integrates voice synthesis with Skype. EasyVoice allows a person with voice disabilities to talk with another person located anywhere in the world, removing an important obstacle that affects these people during a phone or VoIP-based conversation.<|reference_end|>
arxiv
@article{condado2007easyvoice:, title={EasyVoice: Integrating voice synthesis with Skype}, author={Paulo A. Condado and Fernando G. Lobo}, journal={arXiv preprint arXiv:0706.3132}, year={2007}, archivePrefix={arXiv}, eprint={0706.3132}, primaryClass={cs.CY cs.HC} }
condado2007easyvoice:
arxiv-590
0706.3146
WiFi Epidemiology: Can Your Neighbors' Router Make Yours Sick?
<|reference_start|>WiFi Epidemiology: Can Your Neighbors' Router Make Yours Sick?: In densely populated urban areas, WiFi routers form a tightly interconnected proximity network that can be exploited as a substrate for the spreading of malware able to launch massive fraudulent attacks and affect the WiFi networks of entire urban areas. In this paper we consider several scenarios for the deployment of malware that spreads solely over the wireless channel of major urban areas in the US. We develop an epidemiological model that takes into consideration prevalent security flaws on these routers. The spread of such a contagion is simulated on real-world data for geo-referenced wireless routers. We uncover a major weakness of WiFi networks: most of the simulated scenarios show tens of thousands of routers infected in as little as two weeks, with the majority of the infections occurring in the first 24 to 48 hours. We indicate possible containment and prevention measures to limit the eventual harm of such an attack.<|reference_end|>
arxiv
@article{hu2007wifi, title={WiFi Epidemiology: Can Your Neighbors' Router Make Yours Sick?}, author={Hao Hu, Steven Myers, Vittoria Colizza, Alessandro Vespignani}, journal={Proceedings of the National Academy of Sciences, vol. 106, no. 5, 1318-1323 (2009)}, year={2007}, doi={10.1073/pnas.0811973106}, archivePrefix={arXiv}, eprint={0706.3146}, primaryClass={cs.CR physics.soc-ph} }
hu2007wifi
arxiv-591
0706.3159
Une s\'emantique observationnelle du mod\`ele des bo\^ites pour la r\'esolution de programmes logiques (version \'etendue)
<|reference_start|>Une s\'emantique observationnelle du mod\`ele des bo\^ites pour la r\'esolution de programmes logiques (version \'etendue): This report specifies an observational semantics and gives an original presentation of Byrd's box model. The approach accounts for the semantics of Prolog tracers independently of any particular implementation. Traces are, in general, considered rather obscure and difficult to use. The proposed formal presentation of a trace constitutes a simple and pedagogical approach for teaching Prolog or for implementing Prolog tracers, and it constitutes a form of declarative specification for the tracers. Our approach highlights the qualities of the box model that made its success, but also its drawbacks and limits. As a matter of fact, the presented semantics is only one example to illustrate general problems relating to tracers and observing processes. Observing processes know, of observed processes, only their traces. The issue is then to be able to reconstitute, by analysis of the trace alone, the main part of the observed process, if possible without any loss of information.<|reference_end|>
arxiv
@article{deransart2007une, title={Une s\'emantique observationnelle du mod\`ele des bo\^ites pour la r\'esolution de programmes logiques (version \'etendue)}, author={Pierre Deransart (INRIA Rocquencourt), Mireille Ducass\'e (IRISA), G\'erard Ferrand (LIFO)}, journal={arXiv preprint arXiv:0706.3159}, year={2007}, archivePrefix={arXiv}, eprint={0706.3159}, primaryClass={cs.PL cs.SE} }
deransart2007une
arxiv-592
0706.3165
A solution for actors' viewpoints representation with collaborative product development
<|reference_start|>A solution for actors' viewpoints representation with collaborative product development: As product complexity and market competition increase, collaborative product development is necessary for companies that develop high-quality products in short lead-times. To support product actors from different fields, disciplines, and locations wishing to exchange and share information, the representation of the actors' viewpoints is the underlying requirement of collaborative product development. The actors' viewpoints approach was designed to provide an organisational framework in which the actors' perspectives in the collaboration, and their relationships, can be explicitly gathered and formatted. The approach acknowledges the inevitability of multiple integrations of product information as different views, promotes the gathering of actors' interests, and facilitates the retrieval of adequate information, while providing support for integration through PLM and/or SCM collaboration. In this paper, a solution for neutral viewpoints representation is proposed. The product, process, and organisation information models are discussed in turn. A series of issues relating to the viewpoints representation is discussed in detail. Based on the XML standard, and taking a cyclone vessel as an example, an application case of product information modelling is presented.<|reference_end|>
arxiv
@article{geryville2007a, title={A solution for actors' viewpoints representation with collaborative product development}, author={Hichem Geryville (LIESP), Abdelaziz Bouras (LIESP), Yacine Ouzrout (LIESP), Nikolaos Sapidis}, journal={Research in Interactive Design (2007) 39-40}, year={2007}, doi={10.1007/978-2-287-48370-7}, archivePrefix={arXiv}, eprint={0706.3165}, primaryClass={cs.HC} }
geryville2007a
arxiv-593
0706.3170
Asymptotic Analysis of General Multiuser Detectors in MIMO DS-CDMA Channels
<|reference_start|>Asymptotic Analysis of General Multiuser Detectors in MIMO DS-CDMA Channels: We analyze a MIMO DS-CDMA channel with a general multiuser detector, including nonlinear multiuser detectors, using the replica method. In the many-user limit, the MIMO DS-CDMA channel with the multiuser detector is decoupled into a bank of single-user SIMO Gaussian channels if a spatial spreading scheme is employed. On the other hand, it is decoupled into a bank of single-user MIMO Gaussian channels if a spatial spreading scheme is not employed. The spectral efficiency of the MIMO DS-CDMA channel with the spatial spreading scheme is comparable with that of the MIMO DS-CDMA channel using an optimal space-time block code without the spatial spreading scheme. In the case of the QPSK data modulation scheme, the spectral efficiency of the MIMO DS-CDMA channel with the MMSE detector shows {\it waterfall} behavior and is very close to the corresponding sum capacity when the system load is just below the transition point of the {\it waterfall} behavior. Our result implies that the performance of a multiuser detector taking the data modulation scheme into consideration can be far superior to that of linear multiuser detectors.<|reference_end|>
arxiv
@article{takeuchi2007asymptotic, title={Asymptotic Analysis of General Multiuser Detectors in MIMO DS-CDMA Channels}, author={Keigo Takeuchi, Toshiyuki Tanaka, and Toru Yano}, journal={arXiv preprint arXiv:0706.3170}, year={2007}, doi={10.1109/JSAC.2008.080407}, archivePrefix={arXiv}, eprint={0706.3170}, primaryClass={cs.IT math.IT} }
takeuchi2007asymptotic
arxiv-594
0706.3188
A tutorial on conformal prediction
<|reference_start|>A tutorial on conformal prediction: Conformal prediction uses past experience to determine precise levels of confidence in new predictions. Given an error probability $\epsilon$, together with a method that makes a prediction $\hat{y}$ of a label $y$, it produces a set of labels, typically containing $\hat{y}$, that also contains $y$ with probability $1-\epsilon$. Conformal prediction can be applied to any method for producing $\hat{y}$: a nearest-neighbor method, a support-vector machine, ridge regression, etc. Conformal prediction is designed for an on-line setting in which labels are predicted successively, each one being revealed before the next is predicted. The most novel and valuable feature of conformal prediction is that if the successive examples are sampled independently from the same distribution, then the successive predictions will be right $1-\epsilon$ of the time, even though they are based on an accumulating dataset rather than on independent datasets. In addition to the model under which successive examples are sampled independently, other on-line compression models can also use conformal prediction. The widely used Gaussian linear model is one of these. This tutorial presents a self-contained account of the theory of conformal prediction and works through several numerical examples. A more comprehensive treatment of the topic is provided in "Algorithmic Learning in a Random World", by Vladimir Vovk, Alex Gammerman, and Glenn Shafer (Springer, 2005).<|reference_end|>
arxiv
@article{shafer2007a, title={A tutorial on conformal prediction}, author={Glenn Shafer and Vladimir Vovk}, journal={Journal of Machine Learning Research 9 (2008) 371-421. http://www.jmlr.org/papers/v9/shafer08a.html}, year={2007}, archivePrefix={arXiv}, eprint={0706.3188}, primaryClass={cs.LG stat.ML} }
shafer2007a
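The conformal prediction entry above describes the core guarantee: given an error probability ε and any method producing a point prediction, output a set of labels that contains the true label with probability 1−ε. As an illustration of that idea (not the on-line protocol of the paper itself), here is a minimal split-conformal sketch for regression; `fit_predict` and all other names are hypothetical:

```python
import numpy as np

def split_conformal_interval(X_train, y_train, X_cal, y_cal, x_new,
                             fit_predict, epsilon=0.1):
    """Split-conformal prediction interval for one new point.

    fit_predict(X_train, y_train, X) must return point predictions
    for the rows of X.  Under exchangeability of calibration and test
    points, the returned interval contains the true label with
    probability >= 1 - epsilon.
    """
    # Nonconformity scores: absolute residuals on the calibration set.
    preds_cal = fit_predict(X_train, y_train, X_cal)
    scores = np.abs(y_cal - preds_cal)
    # Conformal quantile with the (n + 1) finite-sample correction,
    # capped at the largest observed score.
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - epsilon)))
    q = np.sort(scores)[min(k, n) - 1]
    y_hat = fit_predict(X_train, y_train, np.atleast_2d(x_new))[0]
    return y_hat - q, y_hat + q
```

The ⌈(n+1)(1−ε)⌉-th order statistic, rather than the plain empirical (1−ε)-quantile, is what yields the finite-sample coverage guarantee the abstract refers to.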
arxiv-595
0706.3295
Lower bounds on the minimum average distance of binary codes
<|reference_start|>Lower bounds on the minimum average distance of binary codes: New lower bounds on the minimum average Hamming distance of binary codes are derived. The bounds are obtained using linear programming approach.<|reference_end|>
arxiv
@article{mounits2007lower, title={Lower bounds on the minimum average distance of binary codes}, author={Beniamin Mounits}, journal={arXiv preprint arXiv:0706.3295}, year={2007}, archivePrefix={arXiv}, eprint={0706.3295}, primaryClass={cs.IT math.CO math.IT} }
mounits2007lower
arxiv-596
0706.3305
A simple generalization of the ElGamal cryptosystem to non-abelian groups II
<|reference_start|>A simple generalization of the ElGamal cryptosystem to non-abelian groups II: This is a study of the MOR cryptosystem using the special linear group over finite fields. The automorphism group of the special linear group is analyzed for this purpose. Given the current state of knowledge, I show that the MOR cryptosystem has better security than the ElGamal cryptosystem over finite fields.<|reference_end|>
arxiv
@article{mahalanobis2007a, title={A simple generalization of the ElGamal cryptosystem to non-abelian groups II}, author={Ayan Mahalanobis}, journal={arXiv preprint arXiv:0706.3305}, year={2007}, archivePrefix={arXiv}, eprint={0706.3305}, primaryClass={cs.CR math.GR} }
mahalanobis2007a
arxiv-597
0706.3341
A Sequent Calculus for Modelling Interferences
<|reference_start|>A Sequent Calculus for Modelling Interferences: A logic calculus is presented that is a conservative extension of linear logic. The motivation beneath this work concerns lazy evaluation, true concurrency and interferences in proof search. The calculus includes two new connectives to deal with multisequent structures and has the cut-elimination property. Extensions are proposed that give first results concerning our objectives.<|reference_end|>
arxiv
@article{fouqueré2007a, title={A Sequent Calculus for Modelling Interferences}, author={Christophe Fouquer\'e (LIPN)}, journal={arXiv preprint arXiv:0706.3341}, year={2007}, archivePrefix={arXiv}, eprint={0706.3341}, primaryClass={cs.LO} }
fouqueré2007a
arxiv-598
0706.3350
Optimal Replica Placement in Tree Networks with QoS and Bandwidth Constraints and the Closest Allocation Policy
<|reference_start|>Optimal Replica Placement in Tree Networks with QoS and Bandwidth Constraints and the Closest Allocation Policy: This paper deals with the replica placement problem on fully homogeneous tree networks, known as the Replica Placement optimization problem. The client requests are known beforehand, while the number and location of the servers are to be determined. We investigate this problem under the Closest access policy with additional QoS and bandwidth constraints. We propose an optimal algorithm in two passes using dynamic programming.<|reference_end|>
arxiv
@article{rehn-sonigo2007optimal, title={Optimal Replica Placement in Tree Networks with QoS and Bandwidth Constraints and the Closest Allocation Policy}, author={Veronika Rehn-Sonigo (INRIA Rh\^one-Alpes, LIP)}, journal={arXiv preprint arXiv:0706.3350}, year={2007}, archivePrefix={arXiv}, eprint={0706.3350}, primaryClass={cs.DC} }
rehn-sonigo2007optimal
arxiv-599
0706.3412
On Canonical Forms of Complete Problems via First-order Projections
<|reference_start|>On Canonical Forms of Complete Problems via First-order Projections: The class of problems complete for NP via first-order reductions is known to be characterized by existential second-order sentences of a fixed form. All such sentences are built around the so-called generalized IS-form of the sentence that defines Independent-Set. This result can also be understood as saying that every sentence that defines an NP-complete problem P can be decomposed into two disjuncts such that the first characterizes a fragment of P as hard as Independent-Set and the second the rest of P. That is, a decomposition that divides every such sentence into a quotient and residue modulo Independent-Set. In this paper, we show that this result can be generalized over a wide collection of complexity classes, including the so-called nice classes. Moreover, we show that such a decomposition can be done for any complete problem with respect to the given class, and that two such decompositions are non-equivalent in general. Interestingly, our results are based on simple and well-known properties of first-order reductions.<|reference_end|>
arxiv
@article{borges2007on, title={On Canonical Forms of Complete Problems via First-order Projections}, author={Nerio Borges, Blai Bonet}, journal={arXiv preprint arXiv:0706.3412}, year={2007}, archivePrefix={arXiv}, eprint={0706.3412}, primaryClass={cs.CC} }
borges2007on
arxiv-600
0706.3430
The Impact of Channel Feedback on Opportunistic Relay Selection for Hybrid-ARQ in Wireless Networks
<|reference_start|>The Impact of Channel Feedback on Opportunistic Relay Selection for Hybrid-ARQ in Wireless Networks: This paper presents a decentralized relay selection protocol for a dense wireless network and describes channel feedback strategies that improve its performance. The proposed selection protocol supports hybrid automatic-repeat-request transmission where relays forward parity information to the destination in the event of a decoding error. Channel feedback is employed for refining the relay selection process and for selecting an appropriate transmission mode in a proposed adaptive modulation transmission framework. An approximation of the throughput of the proposed adaptive modulation strategy is presented, and the dependence of the throughput on system parameters such as the relay contention probability and the adaptive modulation switching point is illustrated via maximization of this approximation. Simulations show that the throughput of the proposed selection strategy is comparable to that yielded by a centralized selection approach that relies on geographic information.<|reference_end|>
arxiv
@article{lo2007the, title={The Impact of Channel Feedback on Opportunistic Relay Selection for Hybrid-ARQ in Wireless Networks}, author={Caleb K. Lo, Robert W. Heath Jr. and Sriram Vishwanath}, journal={arXiv preprint arXiv:0706.3430}, year={2007}, doi={10.1109/TVT.2008.928896}, archivePrefix={arXiv}, eprint={0706.3430}, primaryClass={cs.IT math.IT} }
lo2007the