abstract (string, lengths 7 to 10.1k) | authors (string, lengths 9 to 1.96k) | title (string, lengths 6 to 367) | __index_level_0__ (int64, 5 to 1,000k)
This paper shows how to compute the nonholonomic distance between a pointwise car-like robot and polygonal obstacles. Geometric constructions to compute the shortest paths from a configuration (given orientation and position in the plane of the robot) to a position (i.e., a configuration with unspecified final orientation) are first presented. The geometric structure of the reachable set (the set of points in the plane reachable by paths of given length ℒ) is then used to compute the shortest paths to straight-line segments. Obstacle distance is defined as the length of such shortest paths. The algorithms are developed for robots that can move both forward and backward (Reeds and Shepp's car) or only forward (Dubins' car). They are based on the convexity analysis of the reachable set.
['Marilena Vendittelli', 'Jean-Paul Laumond', 'Carole Nissoux']
Obstacle distance for car-like robots
427,482
Trigrams'n'Tags (TnT) is an efficient statistical part-of-speech tagger. Contrary to claims found elsewhere in the literature, we argue that a tagger based on Markov models performs at least as well as other current approaches, including the Maximum Entropy framework. A recent comparison has even shown that TnT performs significantly better for the tested corpora. We describe the basic model of TnT, the techniques used for smoothing and for handling unknown words. Furthermore, we present evaluations on two corpora.
['Thorsten Brants']
TnT -- A Statistical Part-of-Speech Tagger
179,586
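As an aside for context: the smoothing TnT's abstract alludes to is linear interpolation of unigram, bigram and trigram estimates. A minimal sketch with toy counts and assumed λ weights (TnT itself sets its λs by deleted interpolation, which is omitted here):

```python
# Interpolated trigram tag probability:
# P(t3|t1,t2) = l1*P^(t3) + l2*P^(t3|t2) + l3*P^(t3|t1,t2)
# Counts and lambda weights below are toy values, not TnT's trained ones.
from collections import Counter

tags = ["DT", "NN", "VB", "NN", "DT", "NN"]   # toy tag sequence
uni = Counter(tags)
bi = Counter(zip(tags, tags[1:]))
tri = Counter(zip(tags, tags[1:], tags[2:]))
N = len(tags)

def p_smoothed(t1, t2, t3, lambdas=(0.1, 0.3, 0.6)):
    """Interpolated trigram probability from maximum-likelihood components."""
    l1, l2, l3 = lambdas   # assumed weights
    p_uni = uni[t3] / N
    p_bi = bi[(t2, t3)] / uni[t2] if uni[t2] else 0.0
    p_tri = tri[(t1, t2, t3)] / bi[(t1, t2)] if bi[(t1, t2)] else 0.0
    return l1 * p_uni + l2 * p_bi + l3 * p_tri

print(p_smoothed("DT", "NN", "VB"))
```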
Most research on legacy user interface migration has adopted code understanding as the means for system modeling and reverse engineering. The methodological assumption underlying the CelLEST project is that the purpose of system migration is to enable, and possibly optimize, its current uses on a new platform. This is why CelLEST uses traces of the system-user interaction to reverse engineer the legacy interface, extract its current uses and generate GUIs on new platforms as wrappers for it.
['Eleni Stroulia', 'Mohammad El-Ramly', 'Paul G. Sorenson', 'Roland Penner']
Legacy systems migration in CelLEST
432,858
The integration of semantic representation and retrieval technologies into mainstream Web applications depends on the ease of adoption and re-use of existing information and meta-data. While for textual information the analysis of text content is quite standardized, for multimedia resources many archives and repositories have resorted to defining different metadata representations, creating obstacles to interoperability. The flexibility of RDF allows representing and managing all kinds of meta-data and semantic information about multimedia resources, and this paper proposes a possible strategy, based on a structure called RDF Descriptor, that allows representing, reconciling and semantically tagging multimedia resources of different media formats, possibly coming from different sources with different representations. Experimental results show the feasibility of the approach by reporting results on cross-media and cross-archive semantic searches.
['Dario Bonino', 'Fulvio Corno', 'Paolo Pellegrino']
Versatile RDF Representation for Multimedia Semantic Search
117,213
This paper presents the development of a single-phase AC power source. The power source described is capable of providing a stable AC voltage with variable amplitude and variable frequency over a wide range. Moreover, it can generate various high-quality low-frequency arbitrary waveforms. This is achieved by applying advanced digital control techniques to perform closed-loop control of the single-phase inverter used in the circuit. In this system, a high-performance digital signal processor has been used to realize the discrete-time sliding-mode control strategy. This leads to a system that is robust with respect to system parameter variations and external disturbances. Consequently, high-quality output waveforms can be maintained despite dynamic loading conditions. Some experimental results are demonstrated.
['Kay-Soon Low']
A DSP-based variable AC power source
441,490
ASICS: Authenticated Key Exchange Security Incorporating Certification Systems.
['Colin Boyd', 'Cas Cremers', 'Michèle Feltz', 'Kenneth G. Paterson', 'Bertram Poettering', 'Douglas Stebila']
ASICS: Authenticated Key Exchange Security Incorporating Certification Systems.
764,571
The Artificial Bee Colony (ABC) algorithm is one of the most recently introduced swarm intelligence algorithms, inspired by the foraging behavior of honey bee swarms. It has been widely used in numerical and engineering optimization problems. This paper presents a hybrid artificial bee colony (HABC) model to improve the canonical ABC algorithm. The main idea of HABC is to enhance the information exchange between bees by introducing the crossover operator of genetic algorithms into ABC. With a suitable crossover operation, valuable information is fully utilized, and the algorithm is expected to converge faster and more accurately. Eight versions of the HABC algorithm, combining different selection and crossover methods under the model, were proposed and tested on several benchmark functions. Then, the settings of the new crossover-rate parameter for two well-performing HABC versions are tested to verify their best values. Finally, four rotated functions and four shifted functions are used to test the performance of the two algorithms on complex and asymmetric functions. Experimental results showed that these two versions of the HABC algorithm offer significant improvement over the original ABC and are superior to two other state-of-the-art algorithms on some functions.
['Xiaohui Yan', 'Yunlong Zhu', 'Hanning Chen', 'Hao Zhang']
A novel hybrid artificial bee colony algorithm with crossover operator for numerical optimization
289,086
The capability to easily find useful services (software applications, software components, scientific computations) is becoming increasingly critical in several fields. Current approaches for service retrieval are mostly limited to the matching of their inputs/outputs. Recent works have demonstrated that this approach is not sufficient to discover relevant components. In this paper we argue that, in many situations, service discovery should be based on the specification of service behavior (in particular, the conversation protocol). The underlying idea is to develop matching techniques that operate on behavior models and allow delivery of partial matches and evaluation of the semantic distance between these matches and the user requirements. Consequently, even if a service satisfying exactly the user requirements does not exist, the most similar ones will be retrieved and proposed for reuse by extension or modification. To do so, we reduce the problem of behavioral matching to a graph matching problem and we adapt existing algorithms for this purpose. A prototype is presented (available as a web service) which takes as input two conversation protocols and evaluates the semantic distance between them; the prototype also provides the script of edit operations that can be used to alter the first model to render it identical to the second one.
['Daniela Grigori', 'Juan Carlos Corrales', 'Mokrane Bouzeghoub']
Behavioral matchmaking for service retrieval
394,805
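As an illustration of the reduction this abstract describes (behavioral matching as graph matching), here is a toy version using NetworkX's built-in graph edit distance. The two "conversation protocols" are invented, and the paper's own matcher additionally returns an edit script and richer semantics:

```python
import networkx as nx

# Two hypothetical conversation protocols modeled as directed state graphs
p1 = nx.DiGraph([("start", "quote"), ("quote", "order"), ("order", "end")])
p2 = nx.DiGraph([("start", "quote"), ("quote", "order"),
                 ("order", "invoice"), ("invoice", "end")])

# Exact graph edit distance; NetworkX also offers optimize_graph_edit_distance
# for an anytime stream of improving estimates on larger protocols.
print(nx.graph_edit_distance(p1, p2))   # 2.0: one node and one edge inserted
```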
A knapsack packing neural network of 4n units with both low-order and conjunctive asymmetric synapses is derived from a non-Hamiltonian energy function. Parallel simulations of randomly generated problems of size n ∈ {5, 10, 20} are used to compare network solutions with those of simple greedy fast parallel enumerative algorithms.
['Benjamin J. Hellstrom', 'Laveen N. Kanal']
Knapsack packing networks
446,455
We consider an efficient computational framework for speeding up several machine learning algorithms with almost no loss of accuracy. The proposed framework relies on projections via structured matrices that we call Structured Spinners, which are formed as products of three structured matrix-blocks that incorporate rotations. The approach is highly generic, i.e. i) structured matrices under consideration can either be fully-randomized or learned, ii) our structured family contains as special cases all previously considered structured schemes, iii) the setting extends to the non-linear case where the projections are followed by non-linear functions, and iv) the method finds numerous applications including kernel approximations via random feature maps, dimensionality reduction algorithms, new fast cross-polytope LSH techniques, deep learning, convex optimization algorithms via Newton sketches, quantization with random projection trees, and more. The proposed framework comes with theoretical guarantees characterizing the capacity of the structured model in reference to its unstructured counterpart and is based on a general theoretical principle that we describe in the paper. As a consequence of our theoretical analysis, we provide the first theoretical guarantees for one of the most efficient existing LSH algorithms based on the HD3HD2HD1 structured matrix [Andoni et al., 2015]. The exhaustive experimental evaluation confirms the accuracy and efficiency of structured spinners for a variety of different applications.
['Mariusz Bojarski', 'Anna Choromanska', 'Krzysztof Choromanski', 'Francois Fagan', 'Cédric Gouy-Pailler', 'Anne Morvan', 'Nourhan Sakr', 'Tamás Sarlós', 'Jamal Atif']
Structured adaptive and random spinners for fast machine learning computations
910,153
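For intuition, a sketch of the HD3HD2HD1-style construction the abstract cites [Andoni et al., 2015]: three blocks, each a random ±1 diagonal followed by a normalized Hadamard rotation. The dimension and the final use of the projection are assumptions; real implementations use the fast Walsh-Hadamard transform instead of an explicit matrix:

```python
import numpy as np
from scipy.linalg import hadamard

d = 64                                   # must be a power of two for hadamard()
rng = np.random.default_rng(0)
H = hadamard(d) / np.sqrt(d)             # orthonormal Hadamard block

def hd_block(x):
    """One HD block: random sign flip followed by a fast-mixing rotation."""
    D = rng.choice([-1.0, 1.0], size=d)  # fresh signs per block, as in HD3 HD2 HD1
    return H @ (D * x)

x = rng.standard_normal(d)
y = hd_block(hd_block(hd_block(x)))      # HD3 HD2 HD1 x
print(np.linalg.norm(x), np.linalg.norm(y))   # norms agree: each block is orthogonal
```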
Terrain classification is an important topic in polarimetric synthetic aperture radar (PolSAR) image processing. Among various classification techniques, the stacked sparse autoencoder (SSAE) is a deep learning method that can automatically learn useful features layer by layer in an unsupervised manner. However, the scattering measurements of individual pixels in PolSAR images are affected by speckle; hence, the performance of pixel-based classification approaches would be poor. In this situation, a novel framework is proposed to learn robust features of PolSAR data. Local spatial information is introduced into the SSAE to learn deep spatial sparse features automatically for the first time. Furthermore, the influence of the neighbor pixels on the central pixel is controlled depending on the spatial distances from the neighbor pixels to the central pixel. Experimental results with fully polarimetric SAR data indicate that the proposed method provides a competitive solution.
['Lu Zhang', 'Wenping Ma', 'Dan Zhang']
Stacked Sparse Autoencoder in PolSAR Data Classification Using Local Spatial Information
851,176
Empirical Studies in Decision Rule-Based Flexibility Analysis for Complex Systems Design and Management
['Michel-Alexandre Cardin', 'Yixin Jiang', 'Terence Lim']
Empirical Studies in Decision Rule-Based Flexibility Analysis for Complex Systems Design and Management
952,840
Capacity Management in Global Networking project
['Sylvie Allamagny', 'S. Civanlar', 'Arik N. Kashper']
Capacity Management in Global Networking project
199,139
This paper describes a power spectrum analyzer whose bandwidth is not limited by the mean sampling time. The procedure is based on the estimation of the spectral components of the autocorrelation function of the input signal through the simultaneous random sampling of the given input signal and its randomly "delayed copy". The samples are therefore randomly taken in a double-dimension space, time, and delay. By using a random process in the time domain with a recursive mean previously introduced by the authors in order to avoid any bandwidth limitation due to the sampling strategy, it is shown both theoretically and through simulation that the estimate of the power spectral components is asymptotically unbiased on the unique hypothesis of a synchronized random sampling in the delay domain, i.e., the sampling delays are uniformly distributed in an interval equal to the period of the input signal. The simulation results confirm the theoretical findings.
['Domenico Mirri', 'Gaetano Iuculano', 'Gaetano Pasini', 'F. Filicori', 'Lorenzo Peretto']
A broad-band power spectrum analyzer based on twin-channel delayed sampling
60,489
For video summarization and retrieval, one of the important modules is to group temporally and spatially coherent shots into high-level semantic video clips, a task known as scene segmentation. In this paper, we propose a novel scene segmentation and categorization approach using normalized graph cuts (NCuts). Starting from a set of shots, we first calculate shot similarity from shot key frames. Then, by modeling scene segmentation as a graph partition problem where each node is a shot and the weight of an edge represents the similarity between two shots, we employ NCuts to find the optimal scene segmentation and automatically decide the optimum scene number by a Q function. To discover more useful information from scenes, we analyze the temporal layout patterns of shots and automatically categorize scenes into two different types, i.e., parallel event scenes and serial event scenes. Extensive experiments are conducted on movies and TV series. The promising results demonstrate that the proposed NCuts-based scene segmentation and categorization methods are effective in practice.
['Yanjun Zhao', 'Tao Wang', 'Peng Wang', 'Wei Hu', 'Yangzhou Du', 'Yimin Zhang', 'Guangyou Xu']
Scene Segmentation and Categorization Using NCuts
257,306
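A minimal two-way NCut sketch on a toy shot-similarity matrix may help make the graph-partition step concrete (this is the standard spectral relaxation with the normalized Laplacian; the paper's automatic scene-number selection via the Q function is omitted):

```python
import numpy as np

# Toy similarity matrix for 6 shots with two obvious groups
W = np.array([[1.0, .9, .8, .1, .0, .1],
              [.9, 1.0, .9, .0, .1, .0],
              [.8, .9, 1.0, .1, .0, .1],
              [.1, .0, .1, 1.0, .9, .8],
              [.0, .1, .0, .9, 1.0, .9],
              [.1, .0, .1, .8, .9, 1.0]])

d = W.sum(axis=1)
D_isqrt = np.diag(1.0 / np.sqrt(d))
L_sym = np.eye(len(W)) - D_isqrt @ W @ D_isqrt   # normalized Laplacian
vals, vecs = np.linalg.eigh(L_sym)               # ascending eigenvalues
fiedler = D_isqrt @ vecs[:, 1]                   # relaxed NCut indicator vector
labels = (fiedler > 0).astype(int)               # threshold at zero
print(labels)                                    # e.g. [0 0 0 1 1 1]
```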
The increased capacity and availability of the Internet has led to a proliferation of applications. Internet traffic characterization and identification are important for network management. In this paper, based on detailed flow data collected from the public network of an Internet Service Provider, a flow graph is constructed to model the interactions among users. Considering traffic from different applications, the community structure of the flow graph is analyzed. The near-linear-time community detection algorithm for complex networks, the label propagation algorithm (LPA), is extended to the flow graph for traffic identification. Experimental results show that the proposed algorithm performs well in both accuracy and efficiency.
['Ke Yu', 'Xinyu Zhang', 'Jiaxi Di', 'Xiaofei Wu']
Internet traffic identification based on community detection by label propagation
924,340
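The plain LPA the abstract extends is short enough to sketch: each node repeatedly adopts the label most common among its neighbours until labels stabilise. The graph and iteration cap below are stand-ins for a real flow graph:

```python
import random
import networkx as nx

G = nx.karate_club_graph()            # stand-in for a real host/flow graph
labels = {v: v for v in G}            # every node starts in its own community

for _ in range(20):                   # fixed cap for the sketch
    order = list(G)
    random.shuffle(order)             # asynchronous, random visiting order
    changed = False
    for v in order:
        counts = {}
        for u in G[v]:
            counts[labels[u]] = counts.get(labels[u], 0) + 1
        best = max(counts, key=counts.get)
        if labels[v] != best:
            labels[v], changed = best, True
    if not changed:                   # converged: no label moved this sweep
        break

print(len(set(labels.values())), "communities found")
```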
Identifying effective initiators in OSNs: from the spectral radius perspective
['Songjun Ma', 'Ge Chen', 'Weijie Wu', 'Li Song', 'Xiaohua Tian', 'Xinbing Wang']
Identifying effective initiators in OSNs: from the spectral radius perspective
961,146
We present opportunistic renewal, a lease management protocol designed to keep distributed file systems or distributed shared memories consistent in the presence of a network partition or other computer failures. Our treatment includes an analytical model of the protocol that compares performance with existing lease protocols and quantifies improvements. In addition, this analytical model provides the structure to understand message overhead and availability trade-offs when selecting lease parameters. We include results demonstrating that opportunistic renewal substantially reduces the network overhead associated with lease renewal. As a corollary, opportunistic renewal can reduce the lease length at any given network overhead; e.g., by a factor of 50 at 1% network overhead. Lower overhead makes leasing less intrusive and shorter lease periods allow a system to recover from failure more quickly.
['Randal C. Burns', 'Robert M. Rees', 'Darrell D. E. Long']
An analytical study of opportunistic lease renewal
247,819
Experiences in Using Patterns to Support Process Experts in Wizard Creation.
['Birgit Zimmermann', 'Christoph Rensing', 'Ralf Steinmetz']
Experiences in Using Patterns to Support Process Experts in Wizard Creation.
762,721
We propose a dynamic spatial augmented reality (SAR) system with effective machine learning techniques and edge-based object tracking. Real-time 3D pose estimation is a key problem in projecting images onto moving objects. However, camera-based feature detection is difficult because most targets have a texture-less surface, and the image projection and projected images also interfere with detection. Obtaining 3D shape information with stereo-paired cameras [Resch et al. 2016] is still a time-consuming process, and using a depth sensor with IR [Koizumi et al. 2015] is still unstable and has a fatal time delay for dynamic SAR. Therefore, we quickly and robustly estimate the 3D pose of the target objects by using effective machine learning with IR images, and by the combined use of high-speed edge-based object tracking, we realize a stable and low-delay SAR for moving objects.
['Naoki Hashimoto', 'Daisuke Kobayashi']
Dynamic spatial augmented reality with a single IR camera
826,158
One role of a business system is to provide a representation of a Universe of Discourse, which reflects its structure and behaviour. An equally important function of the system is to support communication within an organisation, by structuring and co-ordinating the actions performed by the organisation's agents. These two roles of a business system may be represented in terms of business and process models, i.e. separating the declarative aspects from the procedural control flow aspects of the system. Although this separation of concerns has many advantages, the differences in representation techniques and focus of the two model types constitute a problem in itself. Abstracting business semantics out of, for instance, technical messaging protocols poses severe problems for business analysts. The main contribution of this paper is a unified framework based on agent oriented concepts for facilitating analysis and integration of business models and process models in a systematic way. The approach suggested bridges the gap between the declarative and social/economic aspects of a business model and the procedural and communicative aspects of a process model in a technology independent manner. We illustrate how our approach can simplify business and process model integration, process specification, process pattern interpretation and process choreography.
['Maria Bergholtz', 'Prasad Jayaweera', 'Paul Johannesson', 'Petia Wohed']
Modelling institutional, communicative and physical domains in agent oriented information systems
549,892
To develop an ASIP (application-specific instruction set processor), both hardware (HW) and a software development environment (SWDE) must be developed. Developing HW and SWDE separately in a short time is difficult, so a HW/SW codesign system is necessary for the rapid development of ASIPs. We have developed C-DASH (C-like design automation shell), a HW/SW codesign system for designing processors based on an ISA (instruction set architecture). We describe the HW/SW codesign system C-DASH and, as an example, a Java processor that directly executes Java bytecode.
['Hideaki Yanagisawa', 'Minoru Uehara', 'Hideki Mori']
Development methodology of ASIP based on Java byte code using HW/SW co-design system for processor design
347,998
Story Link Detection based on Dynamic Information Extending.
['Xiaoyan Zhang', 'Ting Wang', 'Huowang Chen']
Story Link Detection based on Dynamic Information Extending.
808,875
Two highly visible public communication networks are the public-switched telephone network (PSTN) and the Internet. While they typically provide different services and the basic technologies underneath are different, both networks rely heavily on routing for communication between any two points. In this paper, we present a brief overview of routing mechanisms used in the PSTN and the Internet from a historical perspective. In particular, we discuss the role of management for the different routing mechanisms, where and how one is similar or different from the other, as well as where the management aspect is heading in an inter-networked environment of the PSTN and the Internet where voice over IP (VoIP) services are offered.
['Deep Medhi']
Routing Management in the PSTN and the Internet: A Historical Perspective
369,235
Split fabrication, the process of splitting an IC into an untrusted and trusted tier, facilitates access to the most advanced semiconductor manufacturing capabilities available in the world without requiring disclosure of design intent. While obfuscation techniques have been proposed to prevent malicious circuit insertion or modifications in the untrusted tier, detecting a pernicious reliability attack induced in the offshore foundry is more elusive. We describe a methodology for exhaustive testing of components in the untrusted tier using a specialized test-only metal stack for selected sacrificial dies.
['Kaushik Vaidyanathan', 'Bishnu Prasad Das', 'Lawrence T. Pileggi']
Detecting Reliability Attacks during Split Fabrication using Test-only BEOL Stack
242,971
Polarimetric SAR images are a rich data source for crop mapping. However, quad-pol sensors have some limitations due to their complexity, increased data rate, and reduced coverage and revisit time. The main objective of this study was to evaluate the added value of quad-pol data in a multi-temporal crop classification framework based on SAR imagery. With this aim, three RADARSAT-2 scenes were acquired between May and June 2010. After analyzing the separability and the descriptive statistics of the features, an object-based supervised classification was performed using the Random Forests classification algorithm. Classification results obtained with dual-pol (VV-VH) data as input were compared to those using quad-pol data in different polarization bases (linear H-V, circular, and linear 45°), and also to configurations where several polarimetric features (Pauli and Cloude–Pottier decomposition features and co-pol coherence and phase difference) were added. Dual-pol data obtained satisfactory results, equal to those obtained with quad-pol data (in H-V basis) in terms of overall accuracy (0.79) and Kappa values (0.69). Quad-pol data in circular and linear 45° bases resulted in lower accuracies. The inclusion of polarimetric features, particularly co-pol coherence and phase difference, resulted in enhanced classification accuracies, with an overall accuracy of 0.86 and Kappa of 0.79 in the best case, when all the polarimetric features were added. Improvements were also observed in the identification of some particular crops, but major crops like cereals, rapeseed, and sunflower already achieved a satisfactory accuracy with the VV-VH dual-pol configuration and obtained only minor improvements. Therefore, it can be concluded that C-band VV-VH dual-pol data is almost ready to be used operationally for crop mapping, as long as at least three acquisitions are available on dates reflecting key growth stages that capture the typical phenological differences of the crops present. In the near future, issues regarding the classification of crops with small field sizes and heterogeneous cover (i.e., fallow and grasslands) need to be tackled to make this application fully operational.
['Arantzazu Larrañaga', 'Jesús Álvarez-Mozos']
On the added value of quad-pol data in a multi-temporal crop classification framework based on RADARSAT-2 imagery
708,362
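To make the classification step concrete, a runnable miniature with synthetic stand-ins for the object-level polarimetric features (the study itself uses RADARSAT-2 features and reports overall accuracy and Kappa, computed as below):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(6)
X = rng.standard_normal((300, 8))               # e.g. VV, VH + polarimetric features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # two toy crop classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X[:200], y[:200])                       # train on the first 200 objects
pred = clf.predict(X[200:])
print("overall accuracy:", (pred == y[200:]).mean())
print("kappa:", cohen_kappa_score(y[200:], pred))
```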
In this paper, we propose a coupled level set (LS) framework for segmentation of the bladder wall using T1-weighted magnetic resonance (MR) images, with clinical applications to virtual cystoscopy (i.e., MR cystography). The framework uses two collaborative LS functions and a regional adaptive clustering algorithm to delineate the bladder wall for wall thickness measurement on a voxel-by-voxel basis. It differs significantly from most pre-existing bladder segmentation work in four aspects. First of all, while most previous work only segments the inner border of the wall, or at most manually segments the outer border, our framework extracts both the inner and outer borders automatically, except that the initial seed point is selected manually. Secondly, it is adapted to T1-weighted images with decreased intensities in urine, as opposed to the enhanced intensities of the T2-weighted scenario and computed tomography. Thirdly, by considering the global image intensity distribution and local intensity contrast, the image energy function defined in the framework is more immune to the inhomogeneity effect, motion artifacts and image noise. Finally, the bladder wall thickness is measured by the length of the integral path between the two borders, mimicking the electric field lines between two iso-potential surfaces. The framework was tested on six datasets with comparison to the well-known Chan-Vese (C-V) LS model. Five experts blindly scored the segmented inner and outer borders of the presented framework and the C-V model. The scores demonstrated statistically the improvement in detecting the inner and outer borders.
['Chaijie Duan', 'Zhengrong Liang', 'Shangliang Bao', 'Hongbin Zhu', 'Su Wang', 'Guangxiang Zhang', 'John J. Chen', 'Hongbing Lu']
A Coupled Level Set Framework for Bladder Wall Segmentation With Application to MR Cystography
396,630
Personalized recommendation in e-learning has attracted the interest of many researchers. How to select the proper learning objects (LOs) and provide a suitable learning path for learners is a complex task. The effectiveness of personalized recommender systems is largely determined by reasonable models of learners and learning resources. However, the modeling method needs further research given learners' special characteristics in e-learning. Heuristic methods have achieved significant successes in personalized recommendation, but the operators of some heuristic algorithms are often fixed, which diminishes the algorithms' extendibility. In this paper, we propose a learner-oriented recommendation approach based on mixed concept mapping and an immune algorithm (IA). First, we build universal models for learners and LOs respectively, then apply mixed concept mapping to assimilate their attributes. Second, we model learner-oriented recommendation as a constraint satisfaction problem (CSP) which aims to minimize the penalty function of unsatisfied indexes. Last, we propose an advanced IA which takes the inherent characteristics of personalized recommendation into consideration, and we design the monomer vaccine and block vaccine to optimize the IA. Our approach is compared with other heuristic algorithms and a traditional teaching method. From the experimental results, it can be concluded that the proposed approach shows high adaptability and efficiency in e-learning recommendation.
['Shanshan Wan', 'Zhendong Niu']
A learner oriented learning recommendation approach based on mixed concept mapping and immune algorithm
709,740
This work presents an analysis of Papoulis' (1977) generalized sampling expansion (GSE) for a wide-sense stationary signal with a known power spectrum in the presence of quantization noise. We find the necessary and sufficient conditions for a GSE system to produce the minimum mean squared error while using the optimal linear estimation filter. This is actually an extension of the optimal linear equalizer (linear source/channel optimization) to the case of M parallel channels.
['Daniel Seidner', 'Meir Feder']
Optimal generalized sampling expansion
502,611
Annotating functional correctness properties of code using assertions, in principle, enables systematic checking of code against behavioral properties. In practice, however, checking assertions can be costly, especially for complex code annotated with rich behavioral properties. This paper introduces a novel approach for distributing the problem of checking assertions for better scalability. Leveraging the fact that assertions should be side-effect free, our approach distributes assertion checking into simpler sub-problems---each focusing on checking one single assertion, so that different assertions are checked in parallel among multiple workers. Furthermore, the sub-problem analysis performed by each worker is guided by the checked assertion to avoid irrelevant path exploration and is prioritized based on the distance towards the checked assertion to provide earlier feedback. A case study shows that our approach can provide a reduction in the analysis time required for symbolic execution of Java programs compared to a non-distributed approach using the Symbolic PathFinder tool.
['Guowei Yang', 'Quan Chau Dong Do', 'Junye Wen']
Distributed Assertion Checking Using Symbolic Execution
639,500
Numerical Study on Path Loss Characteristics Considering Antenna Positions on Car Body at Blind Intersection in Urban Area for Inter-Vehicle Communications Using 700MHz Band
['Suguru Imai', 'Kenji Taguchi', 'Takeshi Kawamura', 'Tatsuya Kashiwa']
Numerical Study on Path Loss Characteristics Considering Antenna Positions on Car Body at Blind Intersection in Urban Area for Inter-Vehicle Communications Using 700MHz Band
661,408
Communication networks are typically large, dynamic and extremely complicated. To deploy, maintain, and trouble-shoot such networks, it is essential to understand how network elements---such as servers, switches, virtual machines, and virtual network functions---are connected to one another, and be able to discover communication paths between them. It is also essential to understand how connections change over time, and be able to pose time-travel queries to retrieve information about past network states. This problem is becoming more acute with the advent of software defined networks, where network functions are virtualized and managed in a cloud infrastructure. We represent a communication network inventory as a graph where the nodes are network entities and edges represent relationships between them, e.g. hosted-on, communicates-with, etc. Querying such a graph, e.g. for troubleshooting, using existing graph query languages is too cumbersome for network analysts. Thus, in this paper we present Nepal---a network path query language, which is designed to effectively retrieve desired paths from a network graph. The main novelty of Nepal is to consider paths as first-class citizens of the language, which achieves closure under composition while maintaining simplicity. We demonstrate the capabilities of Nepal by examples and discuss query evaluation. We illustrate how path queries can simplify the extraction of information from a dynamic inventory of a multi-layer network and can be used for troubleshooting.
['Theodore Johnson', 'Yaron Kanza', 'Laks V. S. Lakshmanan', 'Vladislav Shkapenyuk']
Nepal: a path query language for communication networks
875,013
Fast and efficient design space exploration is a critical requirement for designing computer systems; however, the growing complexity of hardware/software systems and the significantly long run-times of detailed simulators often make it challenging. Machine learning (ML) models have been proposed as popular alternatives that enable fast exploratory studies. The accuracy of any ML model depends heavily on the representativeness of the applications used for training the predictive models. While prior studies have used standard benchmarks or hand-tuned micro-benchmarks to train their predictive models, in this paper, we argue that this is often sub-optimal because of their limited coverage of the program state-space and their inability to be representative of the larger suite of real-world applications. In order to overcome challenges in creating representative training sets, we propose Genesys, an automatic workload generation methodology and framework, which builds upon key low-level application characteristics and enables systematic generation of applications covering a broad range of the program behavior state-space without increasing the training time. We demonstrate that the automatically generated training sets improve upon the state-space coverage provided by applications from popular benchmarking suites like SPEC-CPU2006, MiBench, MediaBench, and TPC-H by over 11x, and improve the accuracy of two machine learning based power and performance prediction systems by over 2.5x and 3.6x, respectively.
['Reena Panda', 'Xinnian Zheng', 'Shuang Song', 'Jee Ho Ryoo', 'Michael LeBeane', 'Andreas Gerstlauer', 'Lizy Kurian John']
Genesys: Automatically generating representative training sets for predictive benchmarking
983,241
This paper presents a framework that enables human-machine interaction with generic visualization devices using personal consumer electronics equipment and scale-invariant feature matching image processing techniques. The proposed implementation tracks the location and orientation of a generic mobile device equipped with an onboard camera and makes it possible to develop innovative and affordable user interfaces. The framework matches the time-variant features extracted from the frame buffer of a 2D or 3D scene with the features extracted from the camera-generated images that contain the displayed scene, in order to derive positioning information with six degrees of freedom. Experimental results regarding precision, accuracy and latency confirmed the applicability of this solution as both an alternative user interface for desktop applications as well as an affordable method to support interaction with virtual environments.
['Cesare Celozzi', 'Fabrizio Lamberti', 'Gianluca Paravati', 'Andrea Sanna']
Controlling generic visualization environments using handheld devices and natural feature tracking
172,900
A wireless sensor network (WSN) can be construed as an intelligent, large-scale device for observing and measuring properties of the physical world. In recent years, the database research community has championed the view that if we construe a WSN as a database (i.e., if a significant aspect of its intelligent behavior is that it can execute declaratively-expressed queries), then one can achieve a significant reduction in the cost of engineering the software that implements a data collection program for the WSN while still achieving, through query optimization, very favorable cost:benefit ratios. This paper describes a query processing framework for WSNs that meets many desiderata associated with the view of WSNs as databases. The framework is presented in the form of a compiler/optimizer, called SNEE, for a continuous declarative query language over sensed data streams, called SNEEql. SNEEql can be shown to meet the expressiveness requirements of a large class of applications. SNEE can be shown to generate effective and efficient query evaluation plans. More specifically, the paper describes the following contributions: (1) a user-level syntax and physical algebra for SNEEql, an expressive continuous query language over WSNs; (2) example concrete algorithms for physical algebraic operators defined in such a way that the task of deriving memory, time and energy analytical cost-estimation models (CEMs) for them becomes straightforward by reduction to a structural traversal of the pseudocode; (3) CEMs for the concrete algorithms alluded to; (4) an architecture for the optimization of SNEEql queries, called SNEE, building on well-established distributed query processing components where possible, but making enhancements or refinements where necessary to accommodate the WSN context; (5) algorithms that instantiate the components in the SNEE architecture, thereby supporting integrated query planning that includes routing, placement and timing; and (6) an empirical performance evaluation of the resulting framework.
['Ixent Galpin', 'Christian Y. A. Brenninkmeijer', 'Alasdair Gray', 'Farhana Jabeen', 'Alvaro A. A. Fernandes', 'Norman W. Paton']
SNEE: a query processor for wireless sensor networks
276,376
In this paper, we propose a novel structure for implementing a kernel adaptive filter as an add-on component for a linear adaptive filter. The kernel adaptive filter has been proposed as a solution to nonlinear adaptive problems, and its effectiveness has been demonstrated. However, it is not intended to replace linear adaptive filters; rather, we expect it to complement the performance of linear ones in nonlinear environments. We therefore consider a novel structure which enables us to implement a kernel adaptive filter as an add-on for a linear adaptive filter. The proposed structure performs as a linear adaptive filter in linear-dominant environments; in nonlinear environments, however, we can add a kernel adaptive filter without any modification to the operation of the linear one. The effectiveness of the proposed method is confirmed through computer simulations.
['Kiyoshi Nishikawa', 'Felix Albu']
Implementation method of kernel adaptive filter as an add-on for a linear adaptive filter
574,422
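One plausible reading of that add-on structure, sketched below: an LMS linear filter runs unmodified, and a KLMS kernel filter is trained on the residual the linear filter leaves behind, so it only contributes in nonlinear conditions. Step sizes, the kernel width, and the residual-driven coupling are all assumptions of this sketch, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(1)
L, mu_lin, mu_ker, gamma = 4, 0.05, 0.2, 2.0
w = np.zeros(L)                       # linear LMS weights
centers, alphas = [], []              # growing KLMS dictionary

def kernel_out(x):
    """KLMS output: weighted sum of Gaussian kernels over stored centers."""
    return sum(a * np.exp(-gamma * np.linalg.norm(x - c) ** 2)
               for a, c in zip(alphas, centers))

x_sig = rng.standard_normal(500)
for n in range(L, len(x_sig)):
    x = x_sig[n - L:n]
    d = np.tanh(x_sig[n - 1]) + 0.5 * x_sig[n - 2]   # toy nonlinear plant
    e_lin = d - w @ x
    w += mu_lin * e_lin * x                          # LMS update, untouched
    e_total = e_lin - kernel_out(x)                  # residual after both filters
    centers.append(x.copy())                         # KLMS learns the residual
    alphas.append(mu_ker * e_total)
print("final combined error:", abs(e_total))
```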
We describe a method for reconstructing X-trees (weighted trees with X as their leaf set) from incomplete distance arrays (in which some values are uncertain or unknown). It allows an unrooted tree to be constructed from 2n - 3 distance values between the n elements of X, under conditions that are made explicit. The construction is based on a relationship between X-trees and weighted generalized 2-trees with vertex set X.
['Alain Guénoche', 'Bruno Leclerc']
The triangles method to build X-trees from incomplete distance matrices
321,670
Digital testing in the last three decades has taught us the value of design for testability (DFT). Disciplines such as scan and built-in self-test (BIST) have emerged as standard practices because they allow logic testing of arbitrarily large systems. This has been one of the greatest achievements in testing thus far. These past decades have also produced significant advances in semiconductor technology, which make extremely fine features and larger scales of integration possible. The beginning of the new millennium is an era of the system-on-a-chip (SOC). Today's specialized SOCs will soon become large-volume production chips and there will lie our testing challenge of the new millennium.
['Vishwani D. Agrawal']
Testing in the fourth dimension
179,611
Motivation: The use of dense single nucleotide polymorphism (SNP) data in genetic linkage analysis of large pedigrees is impeded by significant technical, methodological and computational challenges. Here we describe Superlink-Online SNP, a new powerful online system that streamlines the linkage analysis of SNP data. It features a fully integrated flexible processing workflow comprising both well-known and novel data analysis tools, including SNP clustering, erroneous data filtering, exact and approximate LOD calculations and maximum-likelihood haplotyping. The system draws its power from thousands of CPUs, performing data analysis tasks orders of magnitude faster than a single computer. By providing an intuitive interface to sophisticated state-of-the-art analysis tools coupled with high computing capacity, Superlink-Online SNP helps geneticists unleash the potential of SNP data for detecting disease genes. Results: Computations performed by Superlink-Online SNP are automatically parallelized using novel paradigms, and executed on an unlimited number of private or public CPUs. One novel service is large-scale approximate Markov Chain Monte Carlo (MCMC) analysis. The accuracy of the results is reliably estimated by running the same computation on multiple CPUs and evaluating the Gelman-Rubin score to set aside unreliable results. Another service within the workflow is a novel parallelized exact algorithm for inferring maximum-likelihood haplotyping. The reported system enables genetic analyses that were previously infeasible. We demonstrate the system's capabilities through a study of a large complex pedigree affected with metabolic syndrome. Availability: Superlink-Online SNP is freely available for researchers at http://cbl-hap.cs.technion.ac.il/superlink-snp. The system source code can also be downloaded from the system website. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
['Mark Silberstein', 'Omer Weissbrod', 'Lars Otten', 'Anna Tzemach', 'Andrei Anisenia', 'Oren Shtark', 'Dvir Tuberg', 'Eddie Galfrin', 'Irena Gannon', 'Adel Shalata', 'Zvi Borochowitz', 'Rina Dechter', 'Elizabeth A. Thompson', 'Dan Geiger']
A system for exact and approximate genetic linkage analysis of SNP data in large pedigrees
417,517
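The Gelman-Rubin check the abstract mentions (run the same MCMC computation on several CPUs, keep results only if the between/within-chain variance ratio is near 1) is compact enough to sketch; the 1.1 threshold below is a common convention, not necessarily the system's own setting:

```python
import numpy as np

def gelman_rubin(chains):
    """chains: (m, n) array of m independent chains of n samples each."""
    m, n = chains.shape
    means = chains.mean(axis=1)
    W = chains.var(axis=1, ddof=1).mean()   # within-chain variance
    B = n * means.var(ddof=1)               # between-chain variance
    var_hat = (n - 1) / n * W + B / n       # pooled variance estimate
    return np.sqrt(var_hat / W)             # potential scale reduction factor

rng = np.random.default_rng(2)
chains = rng.standard_normal((4, 1000))     # toy "converged" chains
r_hat = gelman_rubin(chains)
print("R-hat:", r_hat, "reliable:", r_hat < 1.1)
```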
This paper presents fast moving-window algorithms for calculating local statistics in diamond, hexagon, and general polygonal shaped windows of an image, which is important for real-time applications. The algorithms for a diamond shaped window require only seven or eight additions and subtractions per pixel. A fast sparse algorithm needs only four additions and subtractions for a sparse diamond shaped window. A number of other diamond window shapes, such as skewed or parallelogram shaped, long diamond, and lozenge shaped, are also investigated. Similar algorithms are also developed for hexagon shaped windows. The computation for a hexagon window needs only eight additions and subtractions for each pixel. Fast algorithms for general polygonal shaped windows are also developed. The computation cost of all these algorithms is independent of the window size. A variety of synthetic and real images have been tested.
['Changming Sun']
Moving average algorithms for diamond, hexagon, and general polygonal shaped window operations
376,145
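The constant-cost-per-pixel idea behind such algorithms is easiest to see in its simplest 1-D form, sketched below: slide a window by adding the entering sample and subtracting the leaving one, so the cost per output is independent of window size. The paper's diamond and hexagon versions chain a few such updates per pixel:

```python
import numpy as np

def moving_sum(signal, k):
    """Running sum over a length-k window: one add and one subtract per step."""
    out = np.empty(len(signal) - k + 1)
    s = signal[:k].sum()
    out[0] = s
    for i in range(1, len(out)):
        s += signal[i + k - 1] - signal[i - 1]   # sample enters right, leaves left
        out[i] = s
    return out

x = np.arange(10.0)
print(moving_sum(x, 3))   # [ 3.  6.  9. ... 24.], matches brute-force windows
```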
Electrical Stimulation System to Relax the Jaw Elevation Muscles in People with Nocturnal Bruxism
['Pablo Aqueveque', 'R. Lopez', 'Esteban J. Pino']
Electrical Stimulation System to Relax the Jaw Elevation Muscles in People with Nocturnal Bruxism
800,182
We propose a method for measuring symmetry in images by using filter responses from Convolutional Neural Networks (CNNs). The aim of the method is to model human perception of left/right symmetry as closely as possible. Using the Convolutional Neural Network (CNN) approach has two main advantages: First, CNN filter responses closely match the responses of neurons in the human visual system; they take information on color, edges and texture into account simultaneously. Second, we can measure higher-order symmetry, which relies not only on color, edges and texture, but also on the shapes and objects that are depicted in images. We validated our algorithm on a dataset of 300 music album covers, which were rated according to their symmetry by 20 human observers, and compared results with those from a previously proposed method. With our method, human perception of symmetry can be predicted with high accuracy. Moreover, we demonstrate that the inclusion of features from higher CNN layers, which encode more abstract image content, increases the performance further. In conclusion, we introduce a model of left/right symmetry that closely models human perception of symmetry in CD album covers.
['Anselm Brachmann', 'Christoph Redies']
Using Convolutional Neural Network Filters to Measure Left-Right Mirror Symmetry in Images
951,922
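The core measurement is easy to sketch: extract CNN responses for the left half and for the mirrored right half of an image, then compare them. A randomly initialized conv stack stands in here for the pretrained network the authors use, so absolute scores are illustrative only:

```python
import torch
import torch.nn as nn

features = nn.Sequential(                 # stand-in for a pretrained CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

def symmetry_score(img):                  # img: (1, 3, H, W), W even
    w = img.shape[3] // 2
    left = img[..., :w]
    right_mirrored = torch.flip(img[..., w:], dims=[3])
    with torch.no_grad():
        fl, fr = features(left), features(right_mirrored)
    # higher cosine similarity between the response maps = more symmetric
    return nn.functional.cosine_similarity(
        fl.flatten(1), fr.flatten(1)).item()

img = torch.rand(1, 3, 64, 64)
sym = torch.cat([img[..., :32], torch.flip(img[..., :32], dims=[3])], dim=3)
print("random image:    ", symmetry_score(img))
print("mirror-symmetric:", symmetry_score(sym))   # exactly 1.0
```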
This paper addresses 3D face recognition from facial shape. First, we present an effective method to automatically extract the ROI of the facial surface, which mainly depends on automatic detection of the facial bilateral symmetry plane and localization of the nose tip. Then we build a reference plane through the nose tip for calculating relative depth values. Considering the non-rigid property of the facial surface, the ROI is triangulated and parameterized into an isomorphic 2D planar circle, attempting to preserve the intrinsic geometric properties; the relative depth values are mapped at the same time. Finally, we perform eigenface analysis on the mapped relative depth image. The entire scheme is insensitive to pose variance. The experiment on the FRGC v1.0 database obtains a rank-1 identification score of 95%, outperforming the PCA baseline method by 4% and demonstrating the effectiveness of our algorithm.
['Gang Pan', 'Shi Han', 'Zhaohui Wu', 'Yueming Wang']
3D Face Recognition using Mapped Depth Images
261,153
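For reference, the final recognition step (eigenfaces over the mapped depth images) reduces to PCA plus nearest neighbour; a sketch on random stand-in data, not FRGC:

```python
import numpy as np

rng = np.random.default_rng(5)
train = rng.random((40, 32 * 32))            # flattened relative-depth images
mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:20]                              # top-20 "eigen depth faces"

def embed(x):
    return basis @ (x - mean)                # project into the PCA subspace

gallery = np.array([embed(t) for t in train])
probe = embed(train[7] + 0.01 * rng.random(32 * 32))   # noisy copy of face 7
match = np.argmin(np.linalg.norm(gallery - probe, axis=1))
print("best match index:", match)            # expected: 7
```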
Design patterns for parallel computing attempt to make the field accessible to nonexperts by generalizing the common techniques experts use to develop parallel software. Existing parallel patterns have tremendous descriptive power, but it is often unclear to nonexperts how to choose a pattern based on the specific performance goals of a given application. This paper addresses the need for a pattern selection methodology by presenting four patterns and an accompanying decision framework for choosing from these patterns given an application's throughput and latency goals. The patterns are based on recognizing that one can partition an application's data or instructions and that these partitionings can be done in time or space, hence we refer to them as spatiotemporal partitioning strategies. This paper introduces a taxonomy that describes each of the resulting four partitioning strategies and presents a three-step methodology for selecting one or more given a throughput and latency goal. Several case studies are presented to illustrate the use of this methodology. These case studies cover several simple examples as well as more complicated applications including a radar processing application and an H.264 video encoder.
['Henry Hoffmann', 'Anant Agarwal', 'Srinivas Devadas']
Selecting Spatiotemporal Patterns for Development of Parallel Applications
537,856
Dual homing is a fault-tolerance mechanism generally used in IP-based access networks to increase the survivability of the network. In a dual-homing architecture, a host is connected to two different access routers; therefore, it is unlikely that the host will be denied access to the network as the result of a failure in the access network, a failure of the access router, or congestion at the access router. However, dual homing cannot provide survivability with respect to possible failures in the optical core network. To provide survivability in the core network, optical protection and restoration techniques must be used. In the past, dual homing architectures and optical protection schemes have been studied independently of one another. This paper studies coordinated multi-layer survivability techniques that use both dual-homing schemes and optical protection schemes in an IP-based access network over a WDM-based optical core network. Specifically, we investigate the protection design problem in the WDM core network, given that a dual-homing infrastructure is implemented in the access network. Several solutions are proposed, and it is shown that the proposed coordinated survivability schemes can reduce cost compared to the case in which the survivability mechanisms are not coordinated between the IP layer and the optical layer.
['Vinod M. Vokkarane', 'Jianping Wang', 'Jason P. Jue']
Coordinated survivability in IP-over-optical networks with IP-layer dual-homing and optical-layer protection
361,060
In this paper, we describe the initial results of the formative phase of a project that crosses international borders. Alice in the Middle East (Alice ME) is a project designed to adapt the Alice software, develop new curricular materials, and provide professional development for teachers and students in the Middle East. For those who may be considering a collaborative project that would be conducted across international borders, we share lessons learned.
['Saquib Razak', 'Huda Gedawy', 'Wanda Dann', 'Donald Slater']
Alice in the Middle East: An Experience Report from the Formative Phase
672,633
This paper describes the effects of false loops caused by resource sharing. When a separate controller and data path are constructed, two types of false loops can be distinguished: those that go through the controller and those that loop around in the data path. The paper describes a model to detect both types of loops during the resource sharing phase. Based on this model, an algorithm is described which prevents false loops in the combinatorial network to be constructed, while maintaining as much freedom as possible for the resource sharing. Experiments show that the loop-free data paths do not need more functional units than those that contain false loops.
['Leon Stok']
False loops through resource sharing
409,808
The glove puppet is a traditional art in Taiwan. The master puppeteer manipulates the glove puppets and brings each animated puppet character to life. In this work, we attempt to robotize the glove puppet (called X-puppet) along with three types of manipulation interface including a motion editor, a data glove and a motion capture system. The motion editor is a higher level puppet motion composer that creates and combines sequences of control in a timely fashion. The specially designed data glove uses a minimum number of sensors to achieve puppet manipulation. It measures the gesture data and maps it to the motion of the X-puppet in real-time. The motion capture system can extract the puppet's motion from the video and then control the X-puppet to simulate the action. The whole system represents an effort to give this traditional art a new style of performance.
['Jwu-Sheng Hu', 'Jyun-Ji Wang', 'Guan-Qun Sun']
The glove puppet robot: X-puppet
453,083
Math Web Search Interfaces and the Generation Gap of Mathematicians
['Andrea Kohlhase']
Math Web Search Interfaces and the Generation Gap of Mathematicians
769,516
The ocean environmental parameters of a fishing ground have an important effect on the distribution and abundance of the fishing ground. In this paper, a GIS-based method for retrieving the ocean environmental parameters of fishing grounds is introduced. In a case study, this method was used to study the relationships between Kuroshio path variations and the CPUE of the squid (Ommastrephes bartrami) fishing ground in the Northwest Pacific Ocean.
['Quanqin Shao', 'Haijun Yang', 'Zhuoqi Chen']
A GIS-based method for retrieving ocean environmental parameters of fishing grounds
183,562
Parallel decoding is required for low density parity check (LDPC) codes to achieve high decoding throughput, but it suffers from a large set of registers and complex interconnections due to randomly located 1's in the sparse parity check matrix. This paper proposes a new LDPC decoding architecture to reduce registers and alleviate complex interconnections. To reduce the number of messages to be exchanged among processing units, two data flows that can be loosely coupled are developed by allowing duplicated operations. In addition, a partially parallel architecture is proposed to promote memory usage, and an efficient algorithm that schedules the processing order of the partially parallel architecture is also proposed to reduce the overall processing time by overlapping operations. To verify the proposed architecture, a 1024-bit rate-1/2 LDPC decoder is designed using a 0.18 µm CMOS process. The decoder occupies an area of 10.08 mm² and provides almost 1 Gbps decoding throughput at a frequency of 200 MHz.
['Se-Hyeon Kang', 'In-Cheol Park']
Loosely coupled memory-based decoding architecture for low density parity check codes
70,017
The similarity join is an important database primitive which has been successfully applied to speed up applications such as similarity search, data analysis and data mining. The similarity join combines two point sets of a multidimensional vector space such that the result contains all point pairs where the distance does not exceed a parameter ε. In this paper, we propose the Epsilon Grid Order, a new algorithm for determining the similarity join of very large data sets. Our solution is based on a particular sort order of the data points, which is obtained by laying an equi-distant grid with cell length ε over the data space and comparing the grid cells lexicographically. A typical problem of grid-based approaches such as MSJ or the ε-kdB-tree is that large portions of the data sets must be held simultaneously in main memory. Therefore, these approaches do not scale to large data sets. Our technique avoids this problem by an external sorting algorithm and a particular scheduling strategy during the join phase. In the experimental evaluation, a substantial improvement over competitive techniques is shown.
['Christian Böhm', 'Bernhard Braunmüller', 'Florian Krebs', 'Hans-Peter Kriegel']
Epsilon grid order: an algorithm for the similarity join on massive high-dimensional data
577,676
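The grid-order idea is easy to sketch in memory: assign each point a cell key, sort the keys lexicographically, and only test pairs whose cells can lie within ε. The neighbour enumeration below is the naive 3^d cell scan, not the paper's external-memory scheduling:

```python
import numpy as np
from collections import defaultdict
from itertools import product

def eps_join(points, eps):
    cells = defaultdict(list)
    for i, p in enumerate(points):
        cells[tuple((p // eps).astype(int))].append(i)   # grid cell key
    result = []
    for key in sorted(cells):                 # the epsilon grid order
        for off in product((-1, 0, 1), repeat=len(key)):
            nb = tuple(k + o for k, o in zip(key, off))
            if nb < key or nb not in cells:   # visit each cell pair once
                continue
            for i in cells[key]:
                for j in cells[nb]:
                    if (nb != key or i < j) and \
                       np.linalg.norm(points[i] - points[j]) <= eps:
                        result.append((i, j))
    return result

pts = np.random.default_rng(3).random((200, 2))
print(len(eps_join(pts, 0.05)), "pairs within eps")
```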
Traffic sign recognition plays an important role in autonomous vehicles as well as advanced driver assistance systems. Although various methods have been developed, it is still difficult for state-of-the-art algorithms to obtain high recognition precision at low computational cost. In this paper, based on an investigation of the influence that color spaces have on the representation learning of convolutional neural networks, a novel traffic sign recognition approach called DP-KELM is proposed, using a kernel-based extreme learning machine (KELM) classifier with deep perceptual features. Unlike previous approaches, the representation learning process in DP-KELM is implemented in the perceptual Lab color space. Based on the learned deep perceptual features, a kernel-based ELM classifier is trained with high computational efficiency and generalization performance. Through experiments on the German traffic sign recognition benchmark, the proposed method is demonstrated to have higher precision than most of the state-of-the-art approaches. In particular, when compared with the hinge loss stochastic gradient descent method, which has the highest precision, the proposed method achieves a comparable recognition rate at significantly lower computational cost.
['Yujun Zeng', 'Xin Xu', 'Dayong Shen', 'Yuqiang Fang', 'Zhipeng Xiao']
Traffic Sign Recognition Using Kernel Extreme Learning Machines With Deep Perceptual Features
944,316
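The computational-efficiency claim rests on the closed form of kernel ELM training: with kernel matrix K over the training set, the output weights are a single linear solve, alpha = (K + I/C)^-1 Y. A minimal sketch (the deep perceptual feature extraction is outside this snippet; C and gamma are assumed values):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 5))               # stand-in deep features
y = (X[:, 0] + X[:, 1] > 0).astype(int)
Y = np.eye(2)[y]                                # one-hot targets

C = 10.0                                        # assumed regularization constant
K = rbf(X, X)
alpha = np.linalg.solve(K + np.eye(len(X)) / C, Y)   # training = one solve

X_test = rng.standard_normal((10, 5))
pred = rbf(X_test, X).dot(alpha).argmax(axis=1)      # decide by largest output
print(pred)
```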
An important approach for unsupervised landcover classification in remote sensing images is the clustering of pixels in the spectral domain into several fuzzy partitions. In this paper, a multiobjective optimization algorithm is utilized to tackle the problem of fuzzy partitioning, where a number of fuzzy cluster validity indexes are simultaneously optimized. The resultant set of near-Pareto-optimal solutions contains a number of nondominated solutions, which the user can judge relatively and pick the most promising one according to the problem requirements. Real-coded encoding of the cluster centers is used for this purpose. Results demonstrating the effectiveness of the proposed technique are provided for numeric remote sensing data described in terms of feature vectors. Different landcover regions in remote sensing imagery have also been classified using the proposed technique to establish its efficiency.
['Sanghamitra Bandyopadhyay', 'Ujjwal Maulik', 'Anirban Mukhopadhyay']
Multiobjective Genetic Clustering for Pixel Classification in Remote Sensing Imagery
135,236
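For reference, the fuzzy partitioning being optimized is the classical fuzzy c-means scheme below (single objective; the paper instead searches such partitions with a multiobjective GA over several validity indices):

```python
import numpy as np

def fcm(X, c=3, m=2.0, iters=50, seed=0):
    """Plain fuzzy c-means: alternate membership and center updates."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), c, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        # u[i,k] = 1 / sum_j (d[i,k]/d[i,j])^(2/(m-1))
        u = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1))).sum(axis=2)
        centers = (u ** m).T @ X / (u ** m).sum(axis=0)[:, None]
    return u, centers

X = np.vstack([np.random.default_rng(1).normal(mu, 0.3, (50, 4))
               for mu in (0.0, 2.0, 4.0)])          # toy "pixel spectra"
u, centers = fcm(X)
print(u.argmax(axis=1)[:5], u.argmax(axis=1)[-5:])  # crisp labels per pixel
```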
We define a duality between Gaussian multiple-access channels (MACs) and Gaussian broadcast channels (BCs). The dual channels we consider have the same channel gains and the same noise power at all receivers. We show that the capacity region of the BC (both constant and fading) can be written in terms of the capacity region of the dual MAC, and vice versa. We can use this result to find the capacity region of the MAC if the capacity region of only the BC is known, and vice versa. For fading channels we show duality under ergodic capacity, but duality also holds for different capacity definitions for fading channels such as outage capacity and minimum-rate capacity. Using duality, many results known for only one of the two channels can be extended to the dual channel as well.
['Nihar Jindal', 'Sriram Vishwanath', 'Andrea J. Goldsmith']
On the duality of Gaussian multiple-access and broadcast channels
218,475
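The headline result admits a compact worked form; a LaTeX rendering of the constant-channel case as I read the abstract (notation assumed: K users, total BC power P, gains h_k shared by both channels; the fading case replaces these regions with their ergodic counterparts):

```latex
\begin{equation*}
  \mathcal{C}_{\mathrm{BC}}\bigl(P;\,h_1,\dots,h_K\bigr)
  \;=\;
  \bigcup_{\substack{P_k \ge 0,\;\; \sum_{k=1}^{K} P_k = P}}
  \mathcal{C}_{\mathrm{MAC}}\bigl(P_1,\dots,P_K;\,h_1,\dots,h_K\bigr)
\end{equation*}
```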
Query containment and query answering are two important computational tasks in databases. While query answering amounts to computing the result of a query over a database, query containment is the problem of checking whether, for every database, the result of one query is a subset of the result of another query. In this article, we deal with unions of conjunctive queries, and we address query containment and query answering under description logic constraints. Every such constraint is essentially an inclusion dependency between concepts and relations, and their expressive power is due to the possibility of using complex expressions in the specification of the dependencies, for example, intersection and difference of relations, special forms of quantification, regular expressions over binary relations. These types of constraints capture a great variety of data models, including the relational, the entity-relationship, and the object-oriented model, all extended with various forms of constraints. They also capture the basic features of the ontology languages used in the context of the Semantic Web. We present the following results on both query containment and query answering. We provide a method for query containment under description logic constraints, thus showing that the problem is decidable, and analyze its computational complexity. We prove that query containment is undecidable in the case where we allow inequalities in the right-hand-side query, even for very simple constraints and queries. We show that query answering under description logic constraints can be reduced to query containment, and illustrate how such a reduction provides upper-bound results with respect to both combined and data complexity.
['Diego Calvanese', 'Giuseppe De Giacomo', 'Maurizio Lenzerini']
Conjunctive query containment and answering under description logic constraints
237,520
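For readers new to the topic, the constraint-free core that these results extend is the classical Chandra-Merlin check: Q1 is contained in Q2 iff Q2 maps homomorphically into the canonical ("frozen") database of Q1. A brute-force sketch, exponential and deliberately ignoring the description logic constraints:

```python
from itertools import product

def frozen(query):
    """Freeze a conjunctive query's variables into constants,
    giving its canonical database (Chandra-Merlin)."""
    head, body = query
    db = {(rel, tuple(f"c_{t}" for t in args)) for rel, args in body}
    return [f"c_{t}" for t in head], db

def contained_in(q1, q2):
    """q1 is contained in q2 iff some homomorphism maps q2's body into
    q1's canonical database and q2's head onto q1's frozen head."""
    head1, db = frozen(q1)
    head2, body2 = q2
    vars2 = sorted({t for _, args in body2 for t in args} | set(head2))
    consts = sorted({c for _, row in db for c in row})
    for image in product(consts, repeat=len(vars2)):
        h = dict(zip(vars2, image))
        if [h[t] for t in head2] != head1:
            continue
        if all((rel, tuple(h[t] for t in args)) in db for rel, args in body2):
            return True
    return False

# Q1(x) :- R(x, y), R(y, x)    and    Q2(x) :- R(x, y)
q1 = (("x",), [("R", ("x", "y")), ("R", ("y", "x"))])
q2 = (("x",), [("R", ("x", "y"))])
print(contained_in(q1, q2), contained_in(q2, q1))  # True, False
```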
We consider the following maximum disjoint paths problem (MDPP). We are given a large network, and pairs of nodes that wish to communicate over paths through the network; the goal is to simultaneously connect as many of these pairs as possible in such a way that no two communication paths share an edge in the network. This classical problem has been brought into focus recently in papers discussing applications to routing in high-speed networks, where the current lack of understanding of the MDPP is an obstacle to the design of practical heuristics. We consider the class of densely embedded, nearly-Eulerian graphs, which includes the two-dimensional mesh and other planar and locally planar interconnection networks. We obtain a constant-factor approximation algorithm for the maximum disjoint paths problem for this class of graphs; this improves on an O(log n)-approximation for the special case of the two-dimensional mesh due to Aumann-Rabani and the authors. For networks that are not explicitly required to be "high-capacity," this is the first constant-factor approximation for the MDPP in any class of graphs other than trees. We also consider the MDPP in the on-line setting, relevant to applications in which connection requests arrive over time and must be processed immediately. Here we obtain an asymptotically optimal O(log n)-competitive on-line algorithm for the same class of graphs; this improves on an O(log n log log n)-competitive algorithm for the special case of the mesh due to B. Awerbuch et al (1994).
['Jon Kleinberg', 'Éva Tardos']
Disjoint paths in densely embedded graphs
38,921
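Not the paper's constant-factor algorithm, which depends on the densely embedded, nearly-Eulerian structure; as a point of reference, a common greedy baseline routes each request along a current shortest path and deletes the used edges, guaranteeing edge-disjointness. A sketch on a small mesh with hypothetical names:

```python
from collections import deque

def bfs_path(adj, s, t):
    """Shortest path from s to t in an undirected graph (adjacency sets)."""
    prev, q = {s: None}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = [t]
            while prev[path[-1]] is not None:
                path.append(prev[path[-1]])
            return path[::-1]
        for v in sorted(adj[u]):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def greedy_disjoint_paths(edges, pairs):
    """Route requests one by one along shortest paths, deleting used
    edges so that all accepted paths are mutually edge-disjoint."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    routed = []
    for s, t in pairs:                        # arrival order
        path = bfs_path(adj, s, t) if s in adj and t in adj else None
        if path:
            routed.append((s, t, path))
            for a, b in zip(path, path[1:]):  # consume the edges
                adj[a].discard(b)
                adj[b].discard(a)
    return routed

# 3x3 mesh with nodes (row, col)
mesh = [((r, c), (r, c + 1)) for r in range(3) for c in range(2)] + \
       [((r, c), (r + 1, c)) for r in range(2) for c in range(3)]
requests = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((1, 0), (1, 2))]
for s, t, p in greedy_disjoint_paths(mesh, requests):
    print(s, "->", t, "via", p)
```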
Motivation: In recent years, genome-scale metabolic models (GEMs) have played important roles in areas like systems biology and bioinformatics. However, because of the complexity of gene–reaction associations, GEMs often have limitations in gene level analysis and related applications. Hence, the existing methods were mainly focused on applications and analysis of reactions and metabolites. Results: Here, we propose a framework named logic transformation of model (LTM) that is able to simplify the gene–reaction associations and enables integration with other developed methods for gene level applications. We show that the transformed GEMs have increased reaction and metabolite number as well as degree of freedom in flux balance analysis, but the gene–reaction associations and the main features of flux distributions remain constant. In addition, we develop two methods, OptGeneKnock and FastGeneSL, by combining LTM with previously developed reaction-based methods. We show that FastGeneSL outperforms exhaustive search. Finally, we demonstrate the use of the developed methods in two different case studies. We could design fast genetic intervention strategies for targeted overproduction of biochemicals and identify double and triple synthetic lethal gene sets for inhibition of hepatocellular carcinoma tumor growth through the use of OptGeneKnock and FastGeneSL, respectively. Availability and implementation: Source code, implemented in MATLAB, RAVEN toolbox and COBRA toolbox, is publicly available at https://sourceforge.net/projects/logictransformationofmodel. Contact: [email protected] or [email protected] Supplementary information: Supplementary data are available at Bioinformatics online.
['Cheng Zhang', 'Boyang Ji', 'Adil Mardinoglu', 'Jens Nielsen', 'Qiang Hua']
Logical transformation of genome-scale metabolic models for gene level applications and analysis
44,936
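A toy illustration of the kind of rewriting the abstract describes, as I read it (not the authors' MATLAB code): put a gene-reaction rule into disjunctive normal form and split the reaction into one parallel copy per disjunct, so each copy carries an AND-only association; this is what raises the reaction count while preserving flux behavior. sympy performs the DNF here.

```python
from sympy import sympify
from sympy.logic.boolalg import Or, to_dnf

def split_reaction(rxn_id, gene_rule):
    """One parallel reaction copy per disjunct of the DNF'd gene rule,
    so every copy carries a purely conjunctive gene association."""
    dnf = to_dnf(sympify(gene_rule), simplify=True)
    terms = dnf.args if isinstance(dnf, Or) else (dnf,)
    return [(f"{rxn_id}_{i + 1}", str(term)) for i, term in enumerate(terms)]

# "(g1 AND g2) OR g3": isoenzyme g3 and the g1-g2 complex each get a copy.
for rid, rule in split_reaction("R1", "(g1 & g2) | g3"):
    print(rid, ":", rule)
```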
Nowadays, most embedded devices need to support multiple applications running concurrently. In contrast to desktop computing, very often the set of applications is known at design time, and the designer needs to assure that critical applications meet their constraints in every possible use-case. In order to do this, all possible use-cases, i.e., subsets of applications running simultaneously, have to be verified thoroughly. An approach to reduce the verification effort is to perform composability analysis, which has been studied for sets of applications modeled as Synchronous Dataflow Graphs. In this paper we introduce a framework that supports a more general parallel programming model based on the Kahn Process Networks Model of Computation and integrates a complete MPSoC programming environment that includes: compiler-centric analysis, performance estimation, simulation as well as mapping and scheduling of multiple applications. In our solution, composability analysis is performed on parallel traces obtained by instrumenting the application code. A case study performed on three typical embedded applications, JPEG, GSM and MPEG-2, proved the applicability of our approach.
['Jeronimo Castrillon', 'Ricardo Velásquez', 'Anastasia Stulova', 'Weihua Sheng', 'Jianjiang Ceng', 'Rainer Leupers', 'Gerd Ascheid', 'Heinrich Meyr']
Trace-based KPN composability analysis for mapping simultaneous applications to MPSoC platforms
434,702
A major obstacle in single-cell sequencing is sample contamination with foreign DNA. To guarantee clean genome assemblies and to prevent the introduction of contamination into public databases, considerable quality control efforts are put into post-sequencing analysis. Contamination screening generally relies on reference-based methods such as database alignment or marker gene search, which limits the set of detectable contaminants to organisms with closely related reference species. As genomic coverage in the tree of life is highly fragmented, there is an urgent need for a reference-free methodology for contaminant identification in sequence data. We present acdc, a tool specifically developed to aid the quality control process of genomic sequence data. By combining supervised and unsupervised methods, it reliably detects both known and de novo contaminants. First, 16S rRNA gene prediction and the inclusion of ultrafast exact alignment techniques allow sequence classification using existing knowledge from databases. Second, reference-free inspection is enabled by the use of state-of-the-art machine learning techniques that include fast, non-linear dimensionality reduction of oligonucleotide signatures and subsequent clustering algorithms that automatically estimate the number of clusters. The latter also enables the removal of any contaminant, yielding a clean sample. Furthermore, given the data complexity and the ill-posedness of clustering, acdc employs bootstrapping techniques to provide statistically sound confidence values. Tested on a large number of samples from diverse sequencing projects, our software is able to quickly and accurately identify contamination. Results are displayed in an interactive user interface. Acdc can be run from the web as well as used as a dedicated command line application, which allows easy integration into large sequencing project analysis workflows. Acdc can reliably detect contamination in single-cell genome data. In addition to database-driven detection, it complements existing tools by its unsupervised techniques, which allow for the detection of de novo contaminants. Our contribution has the potential to drastically reduce the amount of resources put into these processes, particularly in the context of limited availability of reference species. As single-cell genome data continues to grow rapidly, acdc adds to the toolkit of crucial quality assurance tools.
['Markus Lux', 'Jan Philipp Krüger', 'Christian Rinke', 'Irena Maus', 'Andreas Schlüter', 'Tanja Woyke', 'Alexander Sczyrba', 'Barbara Hammer']
acdc – Automated Contamination Detection and Confidence estimation for single-cell genome data
968,934
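A compressed sketch of the reference-free path described above, with stand-ins: tetranucleotide signatures, PCA in place of the paper's non-linear embedding, and silhouette-based selection of the cluster count in place of acdc's automatic estimation and bootstrapped confidences. The composition-biased synthetic "contigs" and all names are illustrative, not acdc's implementation.

```python
import numpy as np
from itertools import product
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

TETRAMERS = ["".join(p) for p in product("ACGT", repeat=4)]
IDX = {t: i for i, t in enumerate(TETRAMERS)}

def signature(seq):
    """Normalized tetranucleotide frequency vector of one contig."""
    v = np.zeros(len(TETRAMERS))
    for i in range(len(seq) - 3):
        v[IDX[seq[i:i + 4]]] += 1
    return v / max(v.sum(), 1.0)

def estimate_clusters(contigs, max_k=6):
    """Embed signatures and pick the cluster count with the best
    silhouette; several well-separated clusters suggest contamination."""
    X = PCA(n_components=3).fit_transform(
        np.array([signature(c) for c in contigs]))
    best_k, best_s, best_labels = 1, -1.0, None
    for k in range(2, max_k + 1):
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit_predict(X)
        s = silhouette_score(X, labels)
        if s > best_s:
            best_k, best_s, best_labels = k, s, labels
    return best_k, best_s, best_labels

rng = np.random.default_rng(1)
host = ["".join(rng.choice(list("AACGT"), 2000)) for _ in range(30)]
alien = ["".join(rng.choice(list("GGGCT"), 2000)) for _ in range(10)]
k, s, _ = estimate_clusters(host + alien)
print(f"estimated clusters: {k} (silhouette {s:.2f})")
```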
Nodes in energy-efficient ad-hoc wireless networks can benefit more from new ad-hoc services and can better support new emerging ad-hoc applications. In this paper, we introduce an approach to achieve energy efficiency based on both the optimal transmission range and topology management of ad-hoc wireless networks. We derive the relationship between the optimal transmission range (optimal grid length) and different intensities of static network traffic in the equal-grid rectangular GAF (the Geographical Adaptive Fidelity topology management protocol) network, and apply this result to a dynamic network traffic scenario with an adjustable-grid rectangular GAF network to achieve energy efficiency. Then we compare the energy consumption of the adjustable-grid model to that of the traditional (equal-grid) GAF model, derived in the same dynamic traffic scenario. Our results show that the adjustable-grid model saves 78.1% energy in comparison to the minimum energy consumption of the equal-grid model.
['W. Feng', 'Hamada Alshaer', 'Jaafar M. H. Elmirghani']
Energy Efficiency: Optimal Transmission Range with Topology Management in Rectangular Ad-hoc Wireless Networks
430,890
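The existence of an optimal transmission range can be reproduced with a generic first-order multi-hop energy model (an assumption for illustration, not the authors' GAF-specific derivation): shorter hops cut amplifier energy per hop but multiply the hop count and per-hop electronics cost, so an interior minimum appears. All constants are hypothetical.

```python
import numpy as np

# Hypothetical first-order radio parameters (illustrative only).
E_ELEC = 50e-9    # J/bit per hop, transmitter + receiver electronics lumped
E_AMP = 100e-12   # J/bit/m^n amplifier energy
N_EXP = 2         # path-loss exponent
D = 300.0         # source-to-sink distance in metres

def energy_per_bit(r):
    """Energy to relay one bit over distance D using hops of range r."""
    hops = np.ceil(D / r)
    return hops * (2 * E_ELEC + E_AMP * r ** N_EXP)

r_grid = np.linspace(10.0, 300.0, 2000)
e = energy_per_bit(r_grid)          # vectorized over candidate ranges
best = np.argmin(e)
print(f"optimal range ~ {r_grid[best]:.0f} m at {e[best] * 1e9:.1f} nJ/bit")
```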
This paper studies how to select input parameters in density clustering for hot region prediction. There are two parameters, radius and density, in density-based incremental clustering. We first fix density and enumerate radius to find a pair of parameters which leads to the maximum number of clusters, and then we fix radius and enumerate density to find another pair of parameters which leads to the maximum number of clusters. Experimental results show that the proposed method using both pairs of parameters provides better prediction performance than the other method; comparing the two predictive results, the result obtained by fixing radius and enumerating density has slightly higher prediction accuracy than that obtained by fixing density and enumerating radius.
['Jing Hu', 'Xiaolong Zhang']
Prediction of hot regions in protein-protein interaction by density-based incremental clustering with parameter selection
580,902
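The two-stage parameter search reads directly as a grid sweep. The sketch below substitutes scikit-learn's DBSCAN for the paper's density-based incremental clustering and synthetic blobs for protein interface data; in the second stage the radius found in the first stage is reused, which is one plausible reading of "fix radius".

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Synthetic stand-in for residue coordinates: three dense blobs in 3-D.
X = np.vstack([rng.normal(c, 0.4, (40, 3))
               for c in ([0, 0, 0], [3, 0, 0], [0, 3, 0])])

def n_clusters(labels):
    """Number of clusters found, ignoring the noise label -1."""
    return len(set(labels)) - (1 if -1 in labels else 0)

# Stage 1: fix density (min_samples) and enumerate the radius (eps).
fixed_density = 5
best_eps = max(np.linspace(0.1, 2.0, 39),
               key=lambda e: n_clusters(
                   DBSCAN(eps=e, min_samples=fixed_density).fit_predict(X)))

# Stage 2: fix the radius and enumerate density.
best_density = max(range(2, 15),
                   key=lambda m: n_clusters(
                       DBSCAN(eps=float(best_eps),
                              min_samples=m).fit_predict(X)))

print(f"chosen eps={best_eps:.2f}, min_samples={best_density}")
```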
Information about the reliabilities of the received symbols is very beneficial in frequency-hop communication systems. This information, known as side information, can be used to erase unreliable symbols at the input to an errors-and-erasures decoder. For slow-frequency-hop systems it is common that special redundant symbols, referred to as side-information symbols, are included in each dwell interval, and the demodulation of these symbols provides the side information. The corresponding decrease in the number of message symbols that can be sent in each dwell interval makes it desirable to develop alternative methods that do not require side-information symbols. In this paper, one such alternative is proposed, and its performance is evaluated for channels with partial-band interference. It is shown that staggered interleaving of Reed-Solomon code words and iterative errors-and-erasures decoding facilitates the erasure of unreliable symbols without the need for side-information symbols. The performance of this method is compared with the performance of systems that employ standard block interleaving and errors-only decoding or errors-and-erasures decoding with side information obtained from test symbols.
['Thomas G. Macdonald', 'Michael B. Pursley']
Staggered interleaving and iterative errors-and-erasures decoding for frequency-hop packet radio
62,253
We describe an attempt to bridge the gap between educational research and practical innovation by making a package of changes to an introductory programming module based on the insights of existing theoretical work. Theoretical principles are described, used to evaluate previous practices and then employed to guide systematic changes. Preliminary evaluation indicates substantial improvements in student performance and enjoyment while indicating areas in need of further work.
['John Davy', 'Tony Jenkins']
Research-led innovation in teaching and learning programming
249,780
We study the exact bit error rate (BER) performance of convolutional codes with hard decision Viterbi decoding when used over fading additive white Gaussian noise (AWGN) channels with maximum ratio combining (MRC) diversity. Analytical expressions for the bit error rates in Rayleigh, Rician and Nakagami-m fading channels are derived. The accuracy of the analytical results is verified by computer simulation.
['Golnaz Farhadi', 'Norman C. Beaulieu']
Performance Analysis of Convolutional Codes over Fading Channels with Maximum Ratio Combining Diversity
261,092
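For reference, the standard textbook ingredients such an analysis combines (recalled from memory and hedged; not necessarily the paper's exact expressions) are the hard-decision crossover probability of coherent BPSK with L-branch MRC over i.i.d. Rayleigh branches of average SNR per branch, and a union bound over the code's information-weight spectrum; the P_2(d) form shown holds for odd d, while even d adds a halved tie term.

```latex
\[
  p \;=\; \left[\tfrac{1}{2}(1-\mu)\right]^{L}
  \sum_{k=0}^{L-1}\binom{L-1+k}{k}\left[\tfrac{1}{2}(1+\mu)\right]^{k},
  \qquad
  \mu=\sqrt{\frac{\bar\gamma_c}{1+\bar\gamma_c}},
\]
\[
  P_b \;\lesssim\; \sum_{d=d_{\mathrm{free}}}^{\infty}\beta_d\,P_2(d),
  \qquad
  P_2(d)=\sum_{i=\lceil(d+1)/2\rceil}^{d}\binom{d}{i}\,p^{i}(1-p)^{d-i}
  \quad(d\ \text{odd}).
\]
```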
In order to assess Web applications in a more consistent way we have to deal not only with non-functional requirement specification, measurement and evaluation (M&E) information but also with the context information about the evaluation project. When organizations record the collected data from M&E projects, the context information is very often neglected. This can jeopardize the validity of comparisons among similar evaluation projects. We highlight this concern by introducing a quality in use assessment scenario. Then, we propose a solution by representing the context information as a new add-in to the INCAMI M&E framework. Finally, we show how context information can improve Web application evaluations, particularly, data analysis and recommendation processes.
['Hernan Molina', 'Luis Olsina']
Assessing Web Applications Consistently: A Context Information Approach
206,587
Reconciling dependent product and classical logic with call-by-value evaluation is a difficult problem. It is the first step toward a classical proof system for an ML-like language. In such a system, the introduction rule for universal quantification and the elimination rule for dependent product need to be restricted: they can only be applied to values. This value restriction is acceptable for universal quantification (ML-like polymorphism) but makes dependent product unusable in practice. In order to circumvent this limitation we introduce new typing rules and prove their consistency by constructing a realizability model in three layers (values, stacks and terms). Such a technique has already been used to account for classical ML-like polymorphism in call-by-value, and we here extend it to handle dependent product. The main idea is to internalize observational equivalence as a new non-computable operation in the calculus. A crucial property of the obtained model is that the biorthogonal of a set of values which is closed under observational equivalence does not contain more values than the original set.
['Rodolphe Lepigre']
A Realizability Model for a Semantical Value Restriction
694,802
We present a general multiple path scattering model for multiple transmit, multiple receive wireless systems. The model is generated using physical modelling of the scatterers surrounding the transmit and receive arrays. The condition of the resulting multiple-input multiple-output (MIMO) transfer matrix is then examined. A key parameter η which determines the channel condition is identified. This parameter depends on the local scatter geometries, the separation of arrays, and the wavelength of the transmitted signal. We show that there exists a critical value for this parameter at which the channel condition changes sharply. The implication is that the promised linear growth in channel capacity may not eventuate if the separation of the transmit and receive arrays is large (≈10× or 20× at 2 GHz) compared with the distance from array elements to local scatterers. Monte-Carlo simulations are used to demonstrate these claims.
['Leif Hanlen', 'Minyue Fu']
Multiple antenna wireless communication systems: limits to capacity growth
217,241
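The capacity collapse the abstract warns about is easy to see numerically. The sketch below uses the standard equal-power MIMO capacity expression and compares an i.i.d. Rayleigh channel with a rank-one "pinhole" channel of the kind that arises once the arrays are far enough apart that both effectively see a single scattering path; all values are illustrative.

```python
import numpy as np

def capacity_bits(H, snr):
    """Equal-power MIMO capacity of one realization:
    log2 det(I + (snr/nt) H H^H)."""
    nr, nt = H.shape
    sign, logdet = np.linalg.slogdet(
        np.eye(nr) + (snr / nt) * H @ H.conj().T)
    return logdet / np.log(2.0)

rng = np.random.default_rng(0)
nt = nr = 4
snr, trials = 10.0, 2000

def cn(*shape):
    """Unit-variance circularly symmetric complex Gaussian samples."""
    return (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)

rich = np.mean([capacity_bits(cn(nr, nt), snr) for _ in range(trials)])
pinhole = np.mean([capacity_bits(np.outer(cn(nr), cn(nt).conj()), snr)
                   for _ in range(trials)])
print(f"i.i.d. Rayleigh: {rich:.1f} b/s/Hz   rank-one: {pinhole:.1f} b/s/Hz")
```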
Distributing power and ground to a vertically integrated system is a complex and difficult task. Interplane communication and power delivery are achieved by through silicon vias (TSVs) in most of the manufacturing techniques for three-dimensional (3-D) circuits. As shown in this paper, these vertical interconnects provide additional low impedance paths for distributing power and ground within a 3-D circuit. These paths, however, have not been considered in the design process of 3-D power and ground distribution networks. By exploiting these additional paths, the IR drop within each plane is reduced. Alternatively, the routing congestion caused by the TSVs can be decreased by removing stacks of metal vias that are used within a power distribution network. Additionally, the required decoupling capacitance for a circuit can be reduced, resulting in significant savings in area. Case studies of power grids demonstrate a significant reduction of 22% in the number of intraplane vias. Alternatively, a 25% decrease in the decoupling capacitance can be achieved.
['Vasilis F. Pavlidis', 'Giovanni De Micheli']
Power distribution paths in 3-D ICs
102,383
The spectacular evolution of sensor networks and the proliferation of location-sensing devices in daily life activities are leading to an explosion of disparate spatio-temporal data. The collected data describes the movement of mobile objects used to construct what we call trajectory data. The diversity of these generated data has led to a variety of spatio-temporal models. In fact, the conceptual design can be achieved using either enhanced classical models such as spatio-temporal unified modeling language, spatio-temporal entity relationship, or ontological models such as web ontology language. Moreover, the diversity of conceptual formalisms highly increases the heterogeneity of sources as well as the difficulty of interoperating between them. To reduce this complexity, we set up a high level formalism that covers the most important existing conceptual and ontological models. In fact, current abstract formalisms have left out the representation of spatio-temporal properties. In this paper, we present a preliminary work that proposes an ontology-based pivot model for representing spatio-temporal sources.
['Marwa Manaa', 'Ladjel Bellatreche', 'Jalel Akaichi', 'Selma Khouri']
Towards An Ontology-Based Pivot Model for Spatio-Temporal Sources
688,086
Market-Driven Optimal Task Assignment in Spatial Crowdsourcing
['Kaitian Tan', 'Qian Tao']
Market-Driven Optimal Task Assignment in Spatial Crowdsourcing
904,029
Segmentation of Fetal Left Ventricle in Echocardiographic Sequences Based on Dynamic Convolutional Neural Networks
['Li Yu', 'Yi Guo', 'Yuanyuan Wang', 'Jinhua Yu', 'Ping Chen']
Segmentation of Fetal Left Ventricle in Echocardiographic Sequences Based on Dynamic Convolutional Neural Networks
945,345
Although component independence is a common underlying assumption in most reliability composition models, relatively little work focuses on this area. Preliminary work, however, shows that continuation passing style (CPS) compliance is an essential property of independent components. We provide a set of transformations which can be used to convert components to functionally equivalent CPS compliant components. We also present a tool to perform these transformations automatically. We include experimental evidence that our tool successfully produces CPS compliant components without altering the functionality of the original components. Use of our tool or manual transformations will aid in the creation of components whose reliabilities can be composed without violating the underlying assumptions of the composition models.
['Denise M. Woit', 'M. Fan']
Independence transformations and tools for components
189,246
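A minimal illustration of what continuation passing style means for a component, using Python stand-ins rather than the paper's component model and tool: the CPS version never returns into its caller, so composition points become external and explicit, which is what lets a reliability model treat the component in isolation.

```python
# Direct style: results and errors flow back through return/raise.
def parse(s):
    if not s:
        raise ValueError("empty input")
    return int(s)

def direct_pipeline(s):
    return parse(s) * 2

# CPS: the component receives explicit success/failure continuations
# and never returns into its caller.
def parse_cps(s, ok, err):
    if not s:
        return err("empty input")
    return ok(int(s))

def pipeline_cps(s, ok, err):
    return parse_cps(s, lambda n: ok(n * 2), err)

print(direct_pipeline("21"))                              # 42
pipeline_cps("21", print, print)                          # 42
pipeline_cps("", lambda n: n, lambda m: print("error:", m))
```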
The incorporation of temporal semantics into traditional data mining techniques has caused the creation of a new area called Temporal Data Mining. This incorporation is especially necessary if we want to extract useful knowledge from dynamic domains, which are time-varying in nature. However, this process is computationally complex, and therefore it poses more challenges for efficient processing than non-temporal techniques. Based on the inter-transactional framework, in [11] we proposed an algorithm named TSET for mining temporal patterns (sequences) from datasets, which uses a unique tree-based structure for storing all frequent patterns discovered in the mining process. However, in each data mining process, the algorithm must generate the whole structure from scratch. In this work, we propose an extension which consists in reusing the structures generated in previous data mining processes in order to reduce the execution time of the algorithm.
['Francisco Guil', 'Antonio B. Bailón', 'Alfonso Bosch', 'Roque Marín']
An iterative method for mining frequent temporal patterns
903,324
The most challenging issue of conventional Time Amplifiers (TAs) is their limited Dynamic Range (DR). This paper presents a mathematical analysis to clarify the principle of operation of conventional 2× TAs. The mathematical derivations reveal that reducing the strength of the TA's current sources is the simplest way to increase DR. Besides, a new technique is presented to expand the DR of conventional 2× TAs. The proposed technique employs current subtraction in place of changing the strength of the current sources as in conventional gain compensation methods, which results in a more stable gain over a wider DR. The TA is simulated using Spectre-RF in TSMC 0.18 µm CMOS technology. The DR of the 2× TA is expanded to 300 ps with only 9% gain error, while it consumes only 28 µW from a 1.2 V supply voltage.
['Hasan Molaei', 'Ata Khorami', 'Khosrow Hajsadeghi']
A wide dynamic range low power 2× time amplifier using current subtraction scheme
876,325
Visual and inertial sensors, in combination, are well-suited for many robot navigation and mapping tasks. However, correct data fusion, and hence overall system performance, depends on accurate calibration of the 6-DOF transform between the sensors (one or more camera(s) and an inertial measurement unit). Obtaining this calibration information is typically difficult and time-consuming. In this paper, we describe an algorithm, based on the unscented Kalman filter (UKF), for camera-IMU simultaneous localization, mapping and sensor relative pose self-calibration. We show that the sensor-to-sensor transform, the IMU gyroscope and accelerometer biases, the local gravity vector, and the metric scene structure can all be recovered from camera and IMU measurements alone. This is possible without any prior knowledge about the environment in which the robot is operating. We present results from experiments with a monocular camera and a low-cost solid-state IMU, which demonstrate accurate estimation of the calibration parameters and the local scene structure.
['Jonathan Kelly', 'Gaurav S. Sukhatme']
Visual-inertial simultaneous localization, mapping and sensor-to-sensor self-calibration
105,897
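The UKF's core primitive is the unscented transform; below is the standard scaled sigma-point form (textbook formulas; the paper's filter wraps full camera-IMU process and measurement models around it). The polar-to-Cartesian example is illustrative.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinearity f using
    the standard 2n+1 scaled sigma points of the UKF."""
    n = len(mean)
    lam = alpha ** 2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)      # matrix square root
    sigmas = np.vstack([mean, mean + S.T, mean - S.T])   # (2n+1, n)
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + 1 - alpha ** 2 + beta
    Y = np.array([f(s) for s in sigmas])
    y_mean = wm @ Y
    d = Y - y_mean
    y_cov = (wc[:, None] * d).T @ d
    return y_mean, y_cov

# Example: polar -> Cartesian, a typical navigation-style nonlinearity.
mean = np.array([1.0, np.pi / 4])      # range, bearing
cov = np.diag([0.01, 0.05])
f = lambda x: np.array([x[0] * np.cos(x[1]), x[0] * np.sin(x[1])])
m, P = unscented_transform(mean, cov, f)
print("mean:", m, "\ncov:\n", P)
```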
The (constrained) minimization of a ratio of set functions is a problem frequently occurring in clustering and community detection. As these optimization problems are typically NP-hard, one uses convex or spectral relaxations in practice. While these relaxations can be solved globally optimally, they are often too loose and thus lead to results far away from the optimum. In this paper we show that every constrained minimization problem of a ratio of non-negative set functions allows a tight relaxation into an unconstrained continuous optimization problem. This result leads to a flexible framework for solving constrained problems in network analysis. While a globally optimal solution for the resulting non-convex problem cannot be guaranteed, we outperform the loose convex or spectral relaxations by a large margin on constrained local clustering problems.
['Thomas Bühler', 'Shyam Sundar Rangapuram', 'Simon Setzer', 'Matthias Hein']
Constrained fractional set programs and their application in local clustering and community detection
460,130
This work discusses CHARM, a Composable Heterogeneous Accelerator-Rich Microprocessor design that provides scalability, flexibility, and design reuse in the space of accelerator-rich CMPs. CHARM features a hardware structure called the accelerator block composer (ABC), which can dynamically compose a set of accelerator building blocks (ABBs) into a loosely coupled accelerator (LCA) to provide orders of magnitude improvement in performance and power efficiency. Our software infrastructure provides a data flow graph to describe the composition, and our hardware components dynamically map available resources to the data flow graph to compose the accelerator from components that may be physically distributed across the CMP. Our ABC is also capable of providing load balancing among available compute resources to increase accelerator utilization. Running medical imaging benchmarks, our experimental results show an average speedup of 2.1X (best case 3.7X) compared to approaches that use LCAs together with a hardware resource manager. We also gain in terms of energy consumption (average 2.4X; best case 4.7X).
['Jason Cong', 'Mohammad Ali Ghodrat', 'Michael Gill', 'Beayna Grigorian', 'Glenn Reinman']
CHARM: a composable heterogeneous accelerator-rich microprocessor
103,914
By means of the limit and jump relations of classical potential theory with respect to the vectorial Helmholtz equation, a wavelet approach is established on a regular surface. The multiscale procedure is constructed in such a way that the emerging scalar, vectorial and tensorial potential kernels act as scaling functions. Corresponding wavelets are defined via a canonical refinement equation. A tree algorithm for fast decomposition of a tangential complex-valued vector field given on a regular surface is developed based on numerical integration rules. Some numerical test examples conclude the paper.
['Willi Freeden', 'Carsten Mayer']
MODELING TANGENTIAL VECTOR FIELDS ON REGULAR SURFACES BY MEANS OF MIE POTENTIALS
468,124
General Statistically Secure Computation with Bounded-Resettable Hardware Tokens.
['Nico Döttling', 'Daniel Kraschewski', 'Jörn Müller-Quade', 'Tobias Nilges']
General Statistically Secure Computation with Bounded-Resettable Hardware Tokens.
791,135
Near-Duplicate Web Video Retrieval and Localization Using Improved Edit Distance
['Hao Liu', 'Qingjie Zhao', 'Hao Wang', 'Cong Zhang']
Near-Duplicate Web Video Retrieval and Localization Using Improved Edit Distance
888,964
Motion is an important clue for industrial inspection, video surveillance, and service machines to localize and recognize products and objects. Because blur co-occurs with motion, it is desirable to develop efficient and robust motion blur detection algorithms. However, existing algorithms are inefficient for detecting spatially varying motion blur. To deal with the problem, this paper presents a theorem according to which motion blur can be efficiently detected and segmented. Following the theorem, the proposed algorithm requires only a simple filtering operation and a variance computation. A pixel can be classified as either blurred or unblurred by substituting the variance into the proposed simple formula and checking the sign of the resulting value. Moreover, a geometric interpretation and two extensions of the algorithm are given. Importantly, based on the geometric interpretation of the indicator function, we develop a one-class classifier, which is more effective than the indicator function and has a computational cost comparable to that of the indicator function. Experimental results on detecting motion-blurred cars, motorcycles, bicycles, bags, and persons demonstrate that the proposed algorithm is very efficient without loss of effectiveness.
['Yanwei Pang', 'H. T. Zhu', 'Xuelong Li', 'Jing Pan']
Motion Blur Detection With an Indicator Function for Surveillance Machines
723,753
Text-to-speech conversion has traditionally been performed either by concatenating short samples of speech or by using rule-based systems to convert a phonetic representation of speech into an acoustic representation, which is then converted into speech. This paper describes a text-to-speech synthesis system for modern standard Arabic based on artificial neural networks and a residual excited LPC coder. The networks offer a storage-efficient means of synthesis without the need for explicit rule enumeration. These neural networks require large prosodically labeled continuous speech databases in their training stage. As such databases are not available for the Arabic language, we have developed one for this purpose. Thus, we discuss the various stages undertaken for this development process. In addition to the interpolation capabilities of neural networks, a linear interpolation of the coder parameters is performed to create smooth transitions at segment boundaries. A residual-excited all-pole vocal tract model and a prosodic-information synthesizer based on neural networks are also described in this paper.
['Fatima Chouireb', 'Mhania Guerti']
Towards a high quality Arabic speech synthesis system based on neural networks and residual excited vocal tract model
533,005
Despite their potential applications in software comprehension, it appears that dynamic visualisation tools are seldom used outside the research laboratory. This paper presents an empirical evaluation of five dynamic visualisation tools - AVID, Jinsight, jRMTool, Together ControlCenter diagrams and Together ControlCenter debugger. The tools were evaluated on a number of general software comprehension and specific reverse engineering tasks using the HotDraw object-oriented framework. The tasks considered typical comprehension issues, including identification of software structure and behaviour, design pattern extraction, extensibility potential, maintenance issues, functionality location, and runtime load. The results revealed that the level of abstraction employed by a tool affects its success in different tasks, and that tools were more successful in addressing specific reverse engineering tasks than general software comprehension activities. It was found that no one tool performs well in all tasks, and some tasks were beyond the capabilities of all five tools. This paper concludes with suggestions for improving the efficacy of such tools.
['Michael Pacione', 'Marc Roper', 'Murray Wood']
A comparative evaluation of dynamic visualisation tools
441,561
Analysis of the tracks of students' actions logged by Technology Enhanced Learning systems can help teachers improve their pedagogical scenarios, making them more relevant to students. Besides, sharing and reusing teachers' know-how and experience in session analysis of learning systems also facilitates scenario enhancement. In this paper, we present our proposal for achieving these goals. We first describe the method for calculating pedagogical indicators from tracks collected using the Usage Tracking Language and the Data Combination Language. Then, we present our tool, which executes these indicators. We also present results which validate our proposal on tracks from a real learning environment. Our approach can be used to support engineering and re-engineering of pedagogical scenarios in Technology Enhanced Learning systems.
['Diem Pham Thi Ngoc', 'Sébastien Iksal', 'Christophe Choquet']
Re-engineering of Pedagogical Scenarios Using the Data Combination Language and Usage Tracking Language
322,073
One problem of robot integration in a flexible manufacturing cell is the efficient programming of the robot. In this paper a robot programming system is proposed which offers the possibility of user-oriented, flexible robot programming. The programming system consists of four levels which enable fully automated robot programming with data from a previous planning system, manual editing of application-oriented commands, and low-level system programming and configuration. We have realised this under the Oberon programming system, which offers object-oriented programming features. As an example, we have realised the loading of a press brake by a robot inside a flexible sheet metal forming cell. The basis for the design of the computer integrated bending cell is a global concept in which human resources, techniques and organisation are simultaneously planned.
['Christoph Uhrhan', 'Red Roshardt']
User oriented robot programming in a bending cell
186,273
An integrated beamforming (spatial processing) and multiuser detection (temporal processing) scheme is an effective approach to increase system capacity, but is also impractical due to the high associated computational costs. The authors previously proposed joint domain localized (JDL) processing which achieves significantly lower computational cost and a faster convergence rate in terms of number of training symbols. The paper justifies the choice of the transformation matrix that is the basis for the JDL algorithm. Building on JDL processing, we also introduce a new processor that combines JDL processing and zero forcing for multi-cell uplink CDMA systems. Simulations show that this approach achieves better performance and a faster convergence rate than the JDL algorithm as well as the reduced rank and iterative schemes introduced by other researchers. If restricted by short training sequences, it even outperforms the theoretically optimal processor.
['Rebecca Y. M. Wong', 'Raviraj S. Adve']
Joint domain localized adaptive processing with zero forcing for multi-cell CDMA systems
99,741
Meaning, Function, Purpose, Usefulness, Consequences – Interconnected Concepts
['Ruy J. G. B. de Queiroz']
Meaning, Function, Purpose, Usefulness, Consequences – Interconnected Concepts
457,103
Which computational problems can be solved in polynomial-time and which cannot? Though seemingly technical, this question has wide-ranging implications and brings us to the heart of both theoretical computer science and modern physics.
['Stephen P. Jordan']
Black holes, quantum mechanics, and the limits of polynomial-time computability
893,175
This work addresses what we believe to be a central issue in the field of XML diff and merge computation: the mathematical modeling of the so-called "editing deltas" and the study of their formal abstract properties. We expect at least three outputs from this theoretical work: a common basis to compare performances of the various algorithms through a structural normalization of deltas, a universal and flexible patch application model and a clearer separation of patch and merge engine performance from delta generation performance. Moreover, this work could inspire technical approaches to combine heterogeneous engines thanks to sound delta transformations. This short paper reports current results, discusses key points and outlines some perspectives.
['Jean-Yves Vion-Dury']
Diffing, patching and merging XML documents: toward a generic calculus of editing deltas.
372,740
Bootstrapping Open-Source English-Bulgarian Computational Dictionary
['Krasimir Angelov']
Bootstrapping Open-Source English-Bulgarian Computational Dictionary
611,023
In this paper we will address several fundamental concepts in the analysis of dynamic scenes and will describe a system that derives three-dimensional object representations from the multiple views provided by dynamic images. The two fundamental concepts are that of the correspondence problem (1) and occlusion analysis (2). The former involves the process which is necessary to span the discontinuity inherent in image sequence representations of dynamic scenes. A detailed description of a dynamic scene analysis system (3) is presented with special emphasis on the two major goals pursued in the work, namely, to lessen the dependence on feature point measurements in a structure from motion system and to develop a descriptive three-dimensional object representation that is suitable for dynamic scene analysis systems. The results are a structure from occluding contours system and a "volume segment" representation scheme.
['Worthy N. Martin', 'Jake K. Aggarwal']
Dynamic scenes and object descriptions
159,850
This paper presents algorithms for program abstraction based on the principle of loop summarization, which, unlike traditional program approximation approaches (e.g., abstract interpretation), does not employ iterative fixpoint computation, but instead computes symbolic abstract transformers with respect to a set of abstract domains. This allows for an effective exploitation of problem-specific abstract domains for summarization and, as a consequence, the precision of an abstract model may be tailored to specific verification needs. Furthermore, we extend the concept of loop summarization to incorporate relational abstract domains to enable the discovery of transition invariants, which are subsequently used to prove termination of programs. Well-foundedness of the discovered transition invariants is ensured either by a separate decision procedure call or by using abstract domains that are well-founded by construction. We experimentally evaluate several abstract domains related to memory operations to detect buffer overflow problems. Also, our light-weight termination analysis is demonstrated to be effective on a wide range of benchmarks, including OS device drivers.
['Daniel Kroening', 'Natasha Sharygina', 'Stefano Tonetta', 'Aliaksei Tsitovich', 'Christoph M. Wintersteiger']
Loop summarization using state and transition invariants
118,438
This paper presents two novel sorting network-based architectures for computing high sample rate nonrecursive rank order filters. The proposed architectures consist of significantly fewer comparators than existing sorting network-based architectures that are based on bubble-sort and Batcher's odd-even merge sort. The reduction in the number of comparators is obtained by sorting the columns of the window only once, and by merging the sorted columns in a way such that the number of candidate elements for the output is very small. The number of comparators per output is reduced even further by processing a block of outputs at a time. Block processing procedures that exploit the computational overlap between consecutive windows are developed for both the proposed networks.
['Chaitali Chakrabarti', 'Li-Yu Wang']
Novel sorting network-based architectures for rank order filters
378,540
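A software analogue of the key idea (a sketch, not the hardware comparator network): sort each w-tall column segment once per row band and reuse the presorted columns across the horizontally sliding window; the final np.sort stands in for the small merge network that extracts the rank-order output.

```python
import numpy as np

def median_filter_sorted_cols(img, w=3):
    """Rank-order (median) filter that sorts each w-tall column segment
    once per row band and reuses it across the sliding window."""
    H, W = img.shape
    r = w // 2
    out = np.zeros((H - 2 * r, W - 2 * r), dtype=img.dtype)
    for i in range(H - 2 * r):
        # Sort the w-element column segments of this row band only once.
        cols = np.sort(img[i:i + w, :], axis=0)      # shape (w, W)
        for j in range(W - 2 * r):
            window = cols[:, j:j + w]                # w presorted columns
            merged = np.sort(window, axis=None)      # stand-in for the merge
            out[i, j] = merged[w * w // 2]           # median of w*w elements
    return out

img = np.arange(36).reshape(6, 6) % 7
print(median_filter_sorted_cols(img))
```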
Open architecture networks provide applications with fine-grained control over network elements. With this control comes the risk of misuse and new challenges to security beyond those present in conventional networks. One particular security requirement is the ability of applications to protect the secrecy and integrity of transmitted data while still allowing trusted active elements within the network to operate on that data. This paper describes mechanisms for identifying trusted nodes within a network and securely deploying adaptation instructions to those nodes while protecting application data from unauthorized access and modification. Promising experimental results of our implementation within the conductor adaptation framework are also presented, suggesting that such features can be incorporated into real networks.
['Jun Li', 'Mark D. Yarvis', 'Peter L. Reiher']
Securing distributed adaptation
242,170
Given the statistically multiplexed stream observations of two independent and different types of traffic streams, this paper examines the problem of determining the degree of mixing. In data networks, a common example of such a pair of different streams would be one conforming to the traditional Poisson model with an exponential inter-arrival distribution and the other obeying long-range dependent traffic characterized by a heavy-tailed distribution. The paper provides an expression for the probability density function of the inter-arrival time of the mixed stream in terms of those of the input streams for the general case. As an example, we consider a mixed output traffic stream for the specific case of multiplexed Poisson and heavy-tailed processes, and an approach is provided to estimate input parameters from the first and second order statistics. For arrival rate estimation of the input streams, we propose a look-up table approach based on nearest neighbor search. The simulation results demonstrate that the estimated arrival rates and the moments are indeed close to their respective true values.
['Rajesh Narasimha', 'Raghuveer M. Rao', 'Sohail A. Dianat']
Estimation of parameters of input traffic streams from statistically multiplexed output
97,099
Learning Automata are stochastic decision-making machines that have been widely used in classification, control, and network routing, among others. Despite their versatility, one of the main drawbacks of these models is the low convergence rate of the learning rules used for training. Estimator algorithms such as Pursuit schemes help to overcome this limitation, although they require a high computer memory cost for their operation. This fact becomes a serious inconvenience when a large set of learning automata collaborates in a team to solve a concrete task, since the memory requirements of these algorithms increase exponentially. In these cases, Pursuit algorithms are ineffective due to memory overflow. In this work, we address this problem and propose an estimator algorithm that can be used to train large teams of Learning Automata. The approach uses a strategy similar to Tabu Search algorithms to manage long and short term memory, in order to reduce the memory requirements. The method is applied to classic permutation problems as a test-bed.
['Manuel P. Cuéllar', 'María Ros', 'Miguel Delgado', 'M. Amparo Vila']
An Estimator Update Scheme for Large Teams of Learning Automata
638,223
The Internet in India: better times ahead?
['Grey E. Burkhart', 'Seymour E. Goodman', 'Arun Mehta']
The Internet in India: better times ahead?
22,224
This paper presents a new checkpoint scheme that utilizes the memory usage profile and time series analysis for low-overhead checkpointing. The proposed checkpoint scheme examines the current and future checkpoint overhead, based on the changes of the memory size and the expected checkpoint overhead obtained from the memory profile and adaptive time series analysis, when it decides whether or not to take a checkpoint. Unlike previous works that do not utilize the memory usage profile, it is thus possible to reduce the total execution-time overhead. We also present experimental results which show that the checkpoint overhead of the proposed scheme is reduced compared with the previously developed checkpoint scheme.
['Jiman Hong', 'Sang-Su Kim', 'Yookun Cho', 'Heon Young Yeom', 'Taesoon Park']
On the choice of checkpoint interval using memory usage profile and adaptive time series analysis
464,669
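A deliberately simple decision rule in the spirit of the abstract; the paper's actual criterion and time-series model are richer. Here the memory footprint is forecast by exponential smoothing and the checkpoint is deferred while the footprint is expected to shrink; every parameter is hypothetical.

```python
def should_checkpoint(mem_history_mb, write_cost_per_mb, expected_rework,
                      alpha=0.5):
    """Checkpoint now unless the (exponentially smoothed) forecast says
    the footprint will shrink, and only if the write cost is worth the
    saved re-execution work."""
    forecast = mem_history_mb[0]
    for m in mem_history_mb[1:]:
        forecast = alpha * m + (1.0 - alpha) * forecast
    cost_now = mem_history_mb[-1] * write_cost_per_mb
    cost_forecast = forecast * write_cost_per_mb
    return cost_now <= cost_forecast and expected_rework > cost_now

samples = [120, 130, 180, 175, 160]   # MB, sampled memory usage profile
print(should_checkpoint(samples, write_cost_per_mb=0.02, expected_rework=5.0))
```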
With the advent of the cloud computing model, distributed caches have become the cornerstone for building scalable applications. Popular systems like Facebook [1] or Twitter use Memcached [5], a highly scalable distributed object cache, to speed up applications by avoiding database accesses. Distributed object caches assign objects to cache instances based on a hashing function, and objects are not moved from a cache instance to another unless more instances are added to the cache and objects are redistributed. This may lead to situations where some cache instances are overloaded when some of the objects they store are frequently accessed, while other cache instances are less frequently used. In this paper we propose a multi-resource load balancing algorithm for distributed cache systems. The algorithm aims at balancing both CPU and Memory resources among cache instances by redistributing stored data. Considering the possible conflict of balancing multiple resources at the same time, we give CPU and Memory resources weighted priorities based on the runtime load distributions. A scarcer resource is given a higher weight than a less scarce resource when load balancing. The system imbalance degree is evaluated based on monitoring information and on the utility load of a node, a unit for resource consumption. Besides, since continuous rebalancing of the system may affect the QoS of applications utilizing the cache system, our data selection policy ensures that each data migration minimizes the system imbalance degree and hence, the total reconfiguration cost can be minimized. An extensive simulation is conducted to compare our policy with other policies. Our policy shows a significant improvement in time efficiency and decrease in reconfiguration cost.
['Yu Jia', 'Ivan Brondino', 'Ricardo Peris', 'Marta Martínez', 'Dianfu Ma']
A multi-resource load balancing algorithm for cloud cache systems
78,372
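A greedy sketch of the selection policy as described, with illustrative stand-ins for the weights, imbalance degree and cost model (the paper defines these precisely): weight each resource by scarcity, measure imbalance as the dispersion of weighted per-node utility loads, and pick the single migration that reduces it most.

```python
import numpy as np

def imbalance(loads, weights):
    """Imbalance degree: dispersion of weighted per-node utility loads."""
    return (loads @ weights).std()

def pick_migration(loads, objects, weights):
    """Greedily choose the (object, src, dst) move that most reduces the
    weighted imbalance; objects are (node, cpu_cost, mem_cost) triples."""
    best, best_gain = None, 0.0
    base = imbalance(loads, weights)
    for k, (src, cpu, mem) in enumerate(objects):
        for dst in range(len(loads)):
            if dst == src:
                continue
            trial = loads.copy()
            trial[src] -= (cpu, mem)
            trial[dst] += (cpu, mem)
            gain = base - imbalance(trial, weights)
            if gain > best_gain:
                best, best_gain = (k, src, dst), gain
    return best, best_gain

# Per-node (cpu, mem) loads; memory is the scarcer resource here,
# so it receives the larger scarcity-based weight.
loads = np.array([[0.9, 0.8], [0.2, 0.3], [0.4, 0.2]])
weights = np.array([0.3, 0.7])
objects = [(0, 0.3, 0.4), (0, 0.2, 0.1), (2, 0.1, 0.1)]
move, gain = pick_migration(loads, objects, weights)
print("migrate (object, src, dst):", move, " imbalance reduction:",
      round(gain, 3))
```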