Dataset schema (column: type, observed range):
abstract: string, lengths 8 to 10.1k
authors: string, lengths 9 to 1.96k
title: string, lengths 6 to 367
__index_level_0__: int64, values 13 to 1,000k
Previous research in STEM education demonstrates that students are engaged in a continuous process of identity development, trying to integrate their educational experiences with their perception of who they are and who they wish to become. It is increasingly apparent from this body of research that students are not well supported in this process by the education they currently receive. The goal of this paper is to analyse a specific aspect of the student experience, participation, in order to gain a better understanding of how computer science (CS) and information technology (IT) students engage with CS prior to and during their studies. Drawing on student interview data, we describe and discuss students' qualitatively different ways of experiencing participation in CS and IT. The notion of participation applied here is inspired by Wenger's notion of participation in his social theory of learning. A phenomenographic analysis identifies a spectrum of qualitatively distinct ways in which the students experience participation in CS and IT, ranging from "using", through participation as "continuous development", to "creating new knowledge".
['Anne-Kathrin Peters', 'Anders Berglund', 'Anna Eckerdal', 'Arnold Pears']
First Year Computer Science and IT Students' Experience of Participation in the Discipline
6,187
In an emergency, responders need to grasp the situation and make a correct assessment quickly. Video data from monitoring cameras are an important source of information in such situations. In our research, we built a support system for identifying video data from monitoring cameras. The system collects video data from net-cameras over the Internet and handles them as spatiotemporal data. To search the video data, we propose an intuitive search method in which the user assumes a virtual wall in the city. This method allows users to search for video data recorded during a given period that capture the virtual wall from a certain direction. The user sets this search key on a map, and the system transforms it into a search key for the spatiotemporal access structure that performs the search. When the system shows the search results, the video data that make the whole scene easiest to understand are displayed first. Because the horizontal direction of the screen corresponds to the virtual wall's direction in the real world, the user can narrow down the search results along the horizontal direction of the screen. We built a prototype in Java, Java Servlet, and C++.
['Yiqun Wang', 'Yoshinori Hijikata', 'Shogo Nishida']
Designing a video data management system for monitoring cameras with intuitive interface
12,919
More Anonymity through Trust Degree in Trust-Based Onion Routing
['Peng Zhou', 'Xiapu Luo', 'Rocky K. C. Chang']
More Anonymity through Trust Degree in Trust-Based Onion Routing
14,360
Analytical frequency-domain expressions for single and coupled transmission lines with triangular input waveforms are first developed. The inverse Fourier transform is then used to obtain an expression for the time-domain triangle impulse responses for frequency-independent transmission line parameters. The integral associated with the inverse Fourier transform is solved analytically using a differential-equation-based technique. Closed-form expressions for the triangle impulse responses are given in the form of incomplete Lipschitz-Hankel integrals (ILHIs) of the first kind, which can be efficiently calculated using existing algorithms. Combining these closed-form expressions with a time-domain convolution method that uses a triangle impulse as a basis function provides an accurate and efficient simulation method for very lossy transmission lines embedded within linear and nonlinear circuits.
['Tingdong Zhou', 'Steven L. Dvorak', 'John L. Prince']
Lossy transmission line simulation based on closed-form triangle impulse responses
424,255
This paper proposes a framework for verifying component-based software in the context of component evolution. The framework comprises two stages: modular conformance testing, which updates the now-inaccurate model of the evolved component, and modular verification of the evolving component-based software. When a component evolves through some refinement, the proposed framework focuses only on that component and its model in order to update the model and recheck the whole evolved system. The framework also reuses the previous verification results and the previous models of the evolved component to avoid repeating several steps of the model-update and verification processes.
['Pham Ngoc Hung', 'Takuya Katayama']
Modular Conformance Testing and Assume-Guarantee Verification for Evolving Component-Based Software
466,836
This paper discusses preliminary investigations into the monitorability of contracts for web service descriptions. There are settings where servers do not statically guarantee that they satisfy some specified contract, which forces the client (i.e., the entity interacting with the server) to perform dynamic checks. This scenario may be viewed as an instance of Runtime Verification, where a pertinent question is whether contracts can be adequately monitored at runtime, i.e., the monitorability of contracts. We consider a simple language of finitary contracts describing both clients and servers, and develop a formal framework that describes server contract monitoring. We define monitor properties that potentially contribute towards a comprehensive notion of contract monitorability and show that our simple contract language satisfies these properties.
['Annalizz Vella', 'Adrian Francalanza']
Preliminary Results Towards Contract Monitorability
741,380
In this paper, we propose a secure handoff scheme for the integration of UMTS and 802.11 WLAN networks. Handoff between an 802.11 WLAN and UMTS has some weaknesses and can be hijacked in the middle of a communication session. We propose an architecture for a secure handoff scheme that fixes this problem. The Dynamic Key Exchange Protocol (DKEP) protects users during a handover from UMTS to an 802.11 WLAN environment. The mobile station (MS) and access point (AP) each compute the session key individually. The protocol comprises three phases, and every step in each phase is protected by public-key encryption, so no information can be hijacked between the MS and AP. Our security analysis shows that the handoff between WLAN and UMTS is protected in several respects: for example, user identity and new registrations are protected, avoiding denial of service, key reuse, and other attacks.
['Yen-Chieh Ouyang', 'Chung-Hua Chu']
Short Paper: A Secure Interworking Scheme for UMTS-WLAN
328,458
Methods of Computational Fluid Dynamics are applied to simulate pulsatile blood flow in human vessels and in the aortic arch. The non-Newtonian behaviour of human blood is investigated in simple vessels of actual size, and turbulence effects are taken into account. A detailed time-dependent mathematical convergence test has been carried out, and realistic pulsatile flow is used in all simulations. Results of computer simulations of the blood flow in vessels of two different geometries are presented. For pressure, strain-rate, and velocity-component distributions we found significant disagreements between our results, obtained with a realistic non-Newtonian treatment of human blood, and the simple Newtonian approximation widely used in the literature. A significant increase of the strain rate, and consequently of the wall shear stress distribution, is found in the region of the aortic arch. We consider this result theoretical evidence supporting existing clinical observations, and conclude that models without a non-Newtonian treatment underestimate the risk of disruption to the human vascular system.
['Renat A. Sultanov', 'Dennis Guster', 'Brent Engelbrekt', 'Richard Blankenbecler']
3D Computer Simulations of Pulsatile Human Blood Flows in Vessels and in the Aortic Arch: Investigation of Non-Newtonian Characteristics of Human Blood
352,410
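As an illustration only (the abstract above does not name the constitutive law used): a common non-Newtonian treatment of blood is the shear-thinning Carreau model,

$$ \mu(\dot\gamma) \;=\; \mu_\infty + (\mu_0 - \mu_\infty)\left[1 + (\lambda\dot\gamma)^2\right]^{(n-1)/2}, $$

where $\dot\gamma$ is the strain rate and $\mu_0$, $\mu_\infty$, $\lambda$, $n$ are parameters fitted to blood; the Newtonian approximation criticized above corresponds to a constant viscosity $\mu$.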
In this article we study hardware-oriented versions of the recently introduced Layered ORthogonal lattice Detector (LORD) and Turbo LORD (T-LORD). LORD and T-LORD are attractive Multiple-Input Multiple-Output (MIMO) detection algorithms that aim to approach the optimal Maximum-Likelihood (ML) and Maximum-A-Posteriori (MAP) performance, respectively, while keeping complexity quadratic, rather than exponential, in the number of transmitting antennas. LORD and T-LORD are also well suited to hardware (e.g., ASIC or FPGA) implementation because of their regularity, parallelism, and deterministic latency and complexity. Nevertheless, their complexity is still high for high-cardinality constellations, such as the 64-QAM foreseen by the 802.11n standard. We show that, when only global latency constraints exist, e.g., a fixed time to detect the whole OFDM symbol, the LORD and T-LORD complexity can be remarkably reduced while still approaching the ML and MAP performance, respectively. Notwithstanding the suboptimal, low-complexity, hardware-oriented implementation, LORD and T-LORD approach the EXtrinsic Information Transfer characteristics of the ML and MAP detectors, respectively. To focus on a specific setting, we consider the indoor MIMO wireless LAN 802.11n standard, taking into account errors in channel estimation and a frequency-selective, spatially correlated channel model.
['Alessandro Tomasoni', 'Massimiliano Siti', 'Marco Ferrari', 'Sandro Bellini']
Hardware oriented, quasi-optimal detectors for iterative and non-iterative MIMO receivers
113,722
A very high-level trace for data structures is one which displays a data structure in the shape in which the user conceptualizes it, be it a tree, an array, or a graph. GRAPHTRACE is a system that facilitates the very high-level graphic display of interrelationships among dynamically allocated Pascal records. It offers the user a wide range of options to enable him to "see" the data structures on a graphics screen in a format as close as possible to that in which he visualizes it, thereby providing a useful display capability when the user's conceptual model is a directed graph or tree.
['Sidney L. Getz', 'George Kalligiannis', 'Stephen R. Schach']
A Very High-Level Interactive Graphical Trace for the Pascal Heap
23,104
Maximum entropy models have become popular statistical models in neuroscience and other areas in biology and can be useful tools for obtaining estimates of mutual information in biological systems. However, maximum entropy models fit to small data sets can be subject to sampling bias; i.e., the true entropy of the data can be severely underestimated. Here, we study the sampling properties of estimates of the entropy obtained from maximum entropy models. We focus on pairwise binary models, which are used extensively to model neural population activity. We show that if the data is well described by a pairwise model, the bias is equal to the number of parameters divided by twice the number of observations. If, however, the higher order correlations in the data deviate from those predicted by the model, the bias can be larger. Using a phenomenological model of neural population recordings, we find that this additional bias is highest for small firing probabilities, strong correlations and large population sizes—for the parameters we tested, a factor of about four higher. We derive guidelines for how long a neurophysiological experiment needs to be in order to ensure that the bias is less than a specified criterion. Finally, we show how a modified plug-in estimate of the entropy can be used for bias correction.
['Jakob H. Macke', 'Iain Murray', 'Peter E. Latham']
Estimation Bias in Maximum Entropy Models
458,291
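In symbols (notation ours): writing $K$ for the number of model parameters and $N$ for the number of observations, the well-specified-model result quoted above reads

$$ \mathrm{bias} \;=\; \frac{K}{2N}, $$

where, for a pairwise binary model of $n$ units, $K = n + n(n-1)/2$ (one field per unit plus one coupling per pair).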
Motivation: Secondary structure prediction of RNA sequences is an important problem. There has been progress in this area, but the accuracy of prediction from a single RNA sequence is still limited. In many cases, however, homologous RNA sequences are available along with the target RNA sequence whose secondary structure is to be predicted.

Results: In this article, we propose a new method for secondary structure prediction of individual RNA sequences that takes the information of their homologous sequences into account without assuming a common secondary structure of the entire sequences. The proposed method is based on posterior decoding techniques, which consider all the suboptimal secondary structures of the target and homologous sequences and all the suboptimal alignments between the target sequence and each of the homologous sequences. In our computational experiments, the proposed method provides better predictions than those performed solely on the basis of the information in individual RNA sequences and those performed by methods that predict the common secondary structure of the homologous sequences. Remarkably, we found that common secondary structure predictions sometimes give worse predictions for the secondary structure of a target sequence than predictions from the individual target sequence alone, while the proposed method gives good predictions of the secondary structure of the target sequences in all tested cases.

Availability: Supporting information and software are available online at: http://www.ncrna.org/software/centroidfold/ismb2009/.

Contact: [email protected]

Supplementary information: Supplementary data are available at Bioinformatics online.
['Michiaki Hamada', 'Kengo Sato', 'Hisanori Kiryu', 'Toutai Mituyama', 'Kiyoshi Asai']
Predictions of RNA secondary structure by combining homologous sequence information
131,466
Many problems arising in dealing with high-dimensional data sets involve connection graphs in which each edge is associated with both an edge weight and a d-dimensional linear transformation. We consider vectorized versions of the PageRank and effective resistance which can be used as basic tools for organizing and analyzing complex data sets. For example, the generalized PageRank and effective resistance can be utilized to derive and modify diffusion distances for vector diffusion maps in data and image processing. Furthermore, the edge ranking of the connection graphs determined by the vectorized PageRank and effective resistance are an essential part of sparsification algorithms which simplify and preserve the global structure of connection graphs.
['Fan R. K. Chung', 'Wenbo Zhao']
Ranking and sparsifying a connection graph
547,367
This paper is concerned with the problem of relay assignment in cooperative wireless networks having multiple sources, multiple relays, and a single destination. With the objective of maximizing the minimum capacity among all sources in the network, a mixed-integer linear programming (MILP) problem is formulated, which can be solved by standard branch-and-bound algorithms. To reduce computational complexity, a greedy solution in the form of a lexicographic bottleneck assignment algorithm is proposed. Simulation results obtained for the IEEE 802.16j uplink scenarios show that the greedy algorithm performs very close to the optimal solution but at a much lower computational cost. The proposed greedy solution can also be tailored to provide further improvements on other network performance criteria.
['Tung T. Pham', 'Ha H. Nguyen', 'Hoang Duong Tuan']
Relay Assignment for Max-Min Capacity in Cooperative Wireless Networks
81,124
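As a toy illustration of the max-min objective above (not the authors' lexicographic bottleneck algorithm), the sketch below greedily serves the currently worst-off source first; the capacity matrix and the one-relay-per-source rule are simplifying assumptions made here.

```python
def greedy_max_min_assignment(cap):
    """Toy greedy: repeatedly give the currently worst-off unassigned
    source its best remaining relay (one relay per source, assumed)."""
    free_relays = set(range(len(cap[0])))
    unassigned = set(range(len(cap)))
    assignment = {}
    while unassigned and free_relays:
        # Best achievable capacity right now for each unassigned source.
        best = {s: max((cap[s][r], r) for r in free_relays) for s in unassigned}
        s_min = min(best, key=lambda s: best[s][0])   # bottleneck source
        _, r = best[s_min]
        assignment[s_min] = r
        unassigned.remove(s_min)
        free_relays.remove(r)
    return assignment  # sources left over (if any) get no relay

caps = [[2.0, 3.5, 1.0],
        [1.5, 2.0, 0.5],
        [4.0, 1.0, 2.5]]
print(greedy_max_min_assignment(caps))  # -> {1: 1, 0: 0, 2: 2}
```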
Topic modeling for information retrieval (IR) has attracted significant attention and demonstrated good performance in a wide variety of tasks over the years. In this paper, we first present a comprehensive comparison of various topic modeling approaches, including the so-called document topic models (DTM) and word topic models (WTM), for Chinese spoken document retrieval (SDR). Moreover, different granularities of index features, including words, subword units, and their combinations, are also exploited to work in conjunction with various extensions of topic modeling presented in this paper, so as to alleviate SDR performance degradation caused by speech recognition errors. All of the experiments were performed on the TDT Chinese collection.
['Shih-Hsiang Lin', 'Berlin Chen']
A Comparative Study of Methods for Topic Modeling in Spoken Document Retrieval
746,759
A 65nm energy-efficient power management (PM) scheme with frequency-based control (FBC) is proposed for achieving dynamic voltage scaling (DVS) on-the-fly in an SoC system. DVS and dynamic frequency scaling (DFS) are both key requirements for system processors. The control loop of the proposed single-inductor dual-output (SIDO) power module is merged with a frequency-controlled phase-locked loop (PLL) to achieve DVS and DFS operation simultaneously, and is thereby unaffected by process, supply voltage, and temperature (PVT) variations. Despite the complexity of the power-control scheme, the proposed DVS-on-the-fly power module can directly receive demands from the system processor. Experimental results show that the SIDO power module achieves a peak efficiency of 90%, and that the system processor power can be reduced by 33% with on-the-fly operation under different temperature conditions.
['Yu-Huei Lee', 'Chao-Chang Chiu', 'Ke-Horng Chen', 'Ying-Hsi Lin', 'Chen-Chih Huang']
On-the-fly dynamic voltage scaling (DVS) in 65nm energy-efficient power management with frequency-based control (FBC) for SoC system
28,627
Power-Assist Chair Using Pneumatic Actuator
['Kazushi Sanada', 'Yuki Akiyama']
Power-Assist Chair Using Pneumatic Actuator
976,366
Recently, there has been a lot of interest in the integration of Description Logics (DL) and rules on the Semantic Web. We define guarded hybrid knowledge bases (or g-hybrid knowledge bases) as knowledge bases that consist of a Description Logic knowledge base and a guarded logic program, similar to the $\mathcal{DL}$ + log knowledge bases of Rosati (In Proceedings of the 10th International Conference on Principles of Knowledge Representation and Reasoning, AAAI Press, Menlo Park, CA, 2006, pp. 68–78). g-Hybrid knowledge bases enable an integration of Description Logics and Logic Programming where, unlike in other approaches, variables in the rules of a guarded program do not need to appear in positive non-DL atoms of the body, i.e., DL atoms can act as guards as well. Decidability of satisfiability checking of g-hybrid knowledge bases is shown for the particular DL $\mathcal{DLRO}^{-\leq}$, which is close to OWL DL, by a reduction to guarded programs under the open answer set semantics. Moreover, we show 2-EXPTIME-completeness of satisfiability checking for such g-hybrid knowledge bases. Finally, we discuss advantages and disadvantages of our approach compared with $\mathcal{DL}$ + log knowledge bases.
['Stijn Heymans', 'Jos de Bruijn', 'Livia Predoiu', 'Cristina Feier', 'Davy Van Nieuwenborgh']
Guarded hybrid knowledge bases
254,299
Foreword to the special issue on Nonstandard Applications of Computer Algebra (ACA'2013)
['Francisco Botana', 'Antonio Hernando', 'Eugenio Roanes-Lozano', 'Michael J. Wester']
Foreword to the special issue on Nonstandard Applications of Computer Algebra (ACA'2013)
696,689
Analysing and monitoring social networking activities raises multiple challenges for the evolution of Service Oriented Systems Engineering. This is particularly evident for event detection in social networks and, more generally, for large-scale social analytics, which require continuous processing of data. In this paper we present a service-oriented framework exploring effective ways to leverage the opportunities arising from innovations and evolutions in computational power, storage, and infrastructure, with particular focus on modern architectures including in-memory database technology, in-database computation, massive parallel processing, Open Data Services, and scalability over multi-node clusters in the Cloud. A prototype of this system was tested in the context of a specific kind of social event, an art exhibition of sculptures, where the system collected and analyzed in real time the tweets issued in an entire region, including the exhibition sites, and continuously updated analytical dashboards placed in one of the exhibition rooms.
['Angelo Chianese', 'Francesco Piccialli']
A Service Oriented Framework for Analysing Social Network Activities
892,915
I consider a cheap-talk model in which a firm has a chance to communicate its product quality to consumers. The model describes how advertising can be both informative to consumers and profitable for the firm through its content in a vertically differentiated market. I find that advertising content may be effective in inducing search even if incentives for misrepresentation exist. In particular, a firm with an undesirable low-quality product is able to attract consumers who would not have incurred a search cost had they known its true quality. In this case, a semiseparating equilibrium occurs where the lowest firm types pool upward in order to increase the expected product quality while simultaneously signaling that the product is affordable. Although consumers always benefit from truth in advertising, total welfare may decrease if an undesirable firm is required to reveal its type. Finally, I show that the extent to which misrepresentation can take place increases with the cost of advertising coverage.
['Pedro M. Gardete']
Cheap-Talk Advertising and Misrepresentation in Vertically Differentiated Markets
214,577
This paper studies projected Barzilai-Borwein (PBB) methods for large-scale box-constrained quadratic programming. Recent work has modified the PBB method by incorporating the Grippo-Lampariello-Lucidi (GLL) nonmonotone line search, so as to enable global convergence to be proved. We show by many numerical experiments that the performance of the PBB method deteriorates when the GLL line search is used. We have therefore considered whether the unmodified method is globally convergent, which we show not to be the case by exhibiting a counterexample in which the method cycles. A new projected gradient method (PABB) is then considered that alternately uses the two Barzilai-Borwein steplengths. We also give an example in which this method may cycle, although its practical performance is seen to be superior to the PBB method. With the aim of both ensuring global convergence and preserving the good numerical performance of the unmodified methods, we examine other recent work on nonmonotone line searches and propose a new adaptive variant with some attractive features. Further numerical experiments show that the PABB method with the adaptive line search is the best BB-like method in the positive definite case, and it compares reasonably well against the GPCG algorithm of Moré and Toraldo. In the indefinite case, the PBB method with the adaptive line search is shown on some examples to find local minima with better solution values, and hence may be preferred for this reason.
['Yu-Hong Dai', 'Roger Fletcher']
Projected Barzilai-Borwein methods for large-scale box-constrained quadratic programming
1,058
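For concreteness, a minimal sketch of the PABB iteration on a box-constrained quadratic, alternating the two Barzilai-Borwein steplengths; the nonmonotone/adaptive line-search safeguards the paper studies are deliberately omitted, and the fallback step below is our own simplification.

```python
import numpy as np

def pabb(A, b, lo, hi, x0, iters=200):
    """Projected alternating Barzilai-Borwein (PABB) sketch for
    minimizing 0.5*x'Ax - b'x subject to lo <= x <= hi."""
    x = np.clip(x0, lo, hi)
    g = A @ x - b                                  # gradient of the quadratic
    alpha = 1.0 / max(np.linalg.norm(g), 1e-12)    # crude initial steplength
    for k in range(iters):
        x_new = np.clip(x - alpha * g, lo, hi)     # project onto the box
        g_new = A @ x_new - b
        s, y = x_new - x, g_new - g
        sy = float(s @ y)
        if sy <= 1e-12:                 # safeguard (ours, not the paper's)
            alpha = 1.0
        elif k % 2 == 0:
            alpha = float(s @ s) / sy   # BB1 steplength
        else:
            alpha = sy / float(y @ y)   # BB2 steplength
        x, g = x_new, g_new
    return x

# Tiny usage example with an SPD matrix and the unit box.
A = np.array([[3.0, 0.5], [0.5, 2.0]])
b = np.array([1.0, 1.0])
print(pabb(A, b, lo=0.0, hi=1.0, x0=np.zeros(2)))
```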
Many successful applications of computer vision to image or video manipulation are interactive by nature. However, parameters of such systems are often trained neglecting the user. Traditionally, interactive systems have been treated in the same manner as their fully automatic counterparts. Their performance is evaluated by computing the accuracy of their solutions under some fixed set of user interactions. In this paper, we study the problem of evaluating and learning interactive segmentation systems which are extensively used in the real world. The key questions in this context are how to measure (1) the effort associated with a user interaction, and (2) the quality of the segmentation result as perceived by the user. We conduct a user study to analyze user behavior and answer these questions. Using the insights obtained from these experiments, we propose a framework to evaluate and learn interactive segmentation systems which brings the user in the loop. The framework is based on the use of an active robot user--a simulated model of a human user. We show how this approach can be used to evaluate and learn parameters of state-of-the-art interactive segmentation systems. We also show how simulated user models can be integrated into the popular max-margin method for parameter learning and propose an algorithm to solve the resulting optimisation problem.
['Pushmeet Kohli', 'Hannes Nickisch', 'Carsten Rother', 'Christoph Rhemann']
User-Centric Learning and Evaluation of Interactive Segmentation Systems
205,976
In this paper, we propose a novel face descriptor for face recognition, named Local Line Derivative Pattern (LLDP). High-order derivative images in two directions are obtained by convolving the original images with Sobel masks. A revised binary coding function is proposed, along with three standards for arranging the weights. Based on these standards, the weights of a line neighborhood in two directions are arranged, and the LLDP labels in two directions are calculated with the proposed binary coding function and weights. The labeled image is divided into blocks, from which spatial histograms are extracted separately and concatenated into an entire histogram serving as the feature for recognition. Experiments on the FERET and Extended Yale B databases show the superior performance of the proposed LLDP compared to other existing LBP-based methods. The results prove that LLDP is robust against expression, illumination, and aging variations.
['Zhichao Lian', 'Meng Joo Er', 'Yang Cong']
Local Line Derivative Pattern for face recognition
497,139
The application of e-learning at Croatian universities has increased rapidly with the introduction of the Bologna process to create the European Higher Education Area. The application of digital media for teaching and learning makes distance education for LIS professionals at the Faculty of Humanities and Social Sciences, University of Zagreb, possible. E-learning and traditional classroom learning have been combined to deliver library and information science (LIS) education. The aim of our research was to obtain a general overview of LIS graduates' expectations and experiences using Omega, a specific learning management system.
['Marko Tot', 'Daniela Zivkovic']
The role of e-learning in LIS education: Students' evaluations
347,086
In this paper, we discuss a reconfigurable tree-connected multiprocessor system and its arraylike layout. Each level in our tree consists of several blocks with PEs. The reconfiguration is executed for each block by shifting PEs to the right. It is valid if the number of faulty PEs in each block is less than or equal to that of spare ones in it. We introduce a 7×7 square module with a five-level tree to simplify the arraylike layout. The system with six or more levels is constructed easily by arranging several modules regularly. The comparison with other trees layoutable in planar arrays shows that our tree is superior to others in maximum interconnection length.
['Sumito Nakano', 'Naotake Kamiura', 'Yutaka Hata']
Fault tolerance of a tree-connected multiprocessor system and its arraylike layout
427,939
Review of: Orna, Elisabeth. Making knowledge visible: communicating knowledge through information products. Aldershot: Gower, 2005.
['Elena Maceviciute']
Review of: Orna, Elisabeth. Making knowledge visible: communicating knowledge through information products. Aldershot: Gower, 2005.
761,889
A significant challenge faced by the mobile application industry is developing and maintaining multiple native variants of mobile applications to support different mobile operating systems, devices, and varying application functional requirements. The current industrial practice is to develop and maintain these variants separately: any potential change has to be applied across variants manually, which is neither efficient nor scalable. We consider the problem of supporting multiple platforms as a ‘software product-line engineering’ problem. The paper proposes a novel application of product-line model-driven engineering to mobile application development and addresses the key challenges of feature-based native mobile application variants for multiple platforms. Specifically, we deal with three types of variation in mobile applications: variation due to operating systems and their versions, software and hardware capabilities of mobile devices, and functionalities offered by the mobile application. We develop a tool, MOPPET, that automates the proposed approach. Finally, the results of applying the approach to two industrial case studies show that it is applicable to industrial mobile applications and has the potential to significantly reduce development effort and time.
['Muhammad Usman', 'Muhammad Zohaib Z. Iqbal', 'Muhammad Uzair Khan']
A product-line model-driven engineering approach for generating feature-based mobile applications
900,759
With the shift to the Web 2.0 Internet environment and the popularization of blogs, people write freely on the Internet about daily events, movie reviews, news, and more. Here we can find writings on personal experiences from the past or on past popular culture. This paper analyzes collective memory through the blogosphere. We define blog posts that qualify as containing collective memory, identify the types of collective memory by analyzing comments and trackbacks, and observe how people share their memories and interact with others. The results of the analysis show that blog posts that form a collective memory have more interactions than regular blog posts, and that the types of comments include those that provide information, exchange information, and reminisce. From these results we derive a model of collective memory and show a prototype of a system based on the model. This study is significant in that it rediscovers people's memory activities through the analysis of the collective memory phenomenon on the Internet.
['Young Sik Kim', 'Kibeom Lee', 'Steve SangKi Han']
Study on Collective Memory in the Blogosphere
490,879
Unique interactions between soil and communication components in wireless underground communications necessitate revisiting fundamental communication concepts from a different perspective. In this paper, the capacity profile of the wireless underground (UG) channel under multi-carrier transmission is analyzed based on empirical antenna return loss and channel frequency response models for different soil types and moisture values. It is shown that data rates in excess of 124 Mbps are possible for distances up to 12 m; for shorter distances and lower soil moisture, data rates of 362 Mbps can be achieved. It is also shown that, due to soil moisture variations, the UG channel experiences significant variations in antenna bandwidth and coherence bandwidth, which demands dynamic subcarrier operation. Theoretical analysis based on this empirical data shows that, by cyber-physical adaptation to soil moisture variations, a 180% improvement in channel capacity is possible when soil moisture decreases. Compared to a fixed-bandwidth system, soil-based system and subcarrier bandwidth adaptation leads to capacity gains of 56%-136%. The analysis is based on indoor and outdoor experiments with more than 1,500 measurements taken over a period of 10 months. These semi-empirical capacity results provide further evidence of the potential of the underground channel as a viable medium for high-data-rate communication and highlight potential improvements in this area.
['Abdul Salam', 'Mehmet C. Vuran']
Impacts of Soil Type and Moisture on the Capacity of Multi-Carrier Modulation in Internet of Underground Things
885,733
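A hedged note on the capacity computation: the abstract does not give its formulas, but the standard multi-carrier aggregate that such an analysis presumably builds on is (notation ours)

$$ C \;=\; \sum_{i=1}^{N_{\mathrm{sc}}} \Delta f \,\log_2\!\left(1 + \mathrm{SNR}_i\right), $$

with $\Delta f$ the subcarrier spacing and $\mathrm{SNR}_i$ the per-subcarrier signal-to-noise ratio, which here would fold in the measured antenna return loss and the soil-dependent channel frequency response.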
A new algorithm based on Genetic Programming (GP) for the problem of optimization of Multiple Constant Multiplication (MCM) by Common Subexpression Elimination (CSE) is developed. This method is used for hardware optimization of DSP systems. A solution based on GP is shown in this paper. The performance of the technique is demonstrated in one- and multi-dimensional digital filters with constant coefficients.
['H. Safiri', 'Majid Ahmadi', 'Graham A. Jullien', 'William C. Miller']
A new algorithm for the elimination of common subexpressions in hardware implementation of digital filters by using genetic programming
32,611
Exploiting the emerging reality of affordable multi-core architectures goes through providing programmers with simple abstractions that would enable them to easily turn their sequential programs into concurrent ones that expose as much parallelism as possible. While transactional memory promises to make concurrent programming easy to a wide programmer community, current implementations either disallow nested transactions to run in parallel or do not scale to arbitrary parallel nesting depths. This is an important obstacle to the central goal of transactional memory, as programmers can only start parallel threads in restricted parts of their code. This paper addresses the intrinsic difficulty behind the support for parallel nesting in transactional memory, and proposes a novel solution that, to the best of our knowledge, is the first practical solution to meet the lowest theoretical upper bound known for the problem. Using a synthetic workload configured to test parallel transactions on a multi-core machine, a practical implementation of our algorithm yields substantial speed-ups (up to 22x with 33 threads) relatively to serial nesting, and shows that the time to start and commit transactions, as well as to detect conflicts, is independent of nesting depth.
['João Pedro Barreto 0002', 'Aleksandar Dragojevic', 'Paulo Ferreira', 'Rachid Guerraoui', 'Michal Kapalka']
Leveraging parallel nesting in transactional memory
23,410
This study aims to investigate the structural relationships between secondary school teachers' TPACK, perception of school support for technology use, technostress, and intention to use technology in Korea, where a SMART education initiative has recently been announced for K-12 education. The study employed structural equation modeling in order to examine the causal relationships among the variables, and data from 312 secondary school teachers were analyzed. The results indicated that TPACK and school support had significant effects on technostress. In addition, technostress significantly influenced teachers' intentions to use technology. Lastly, technostress significantly mediated TPACK, school support, and the intention to use technology. Highlights: TPACK and school support influenced secondary school teachers' technostress. Technostress influenced teachers' intentions to use technology. Technostress mediated TPACK, school support, and the intention to use technology. TPACK did not show a significant effect on intention to use technology. Teachers' use of technology may not depend entirely on their knowledge of content and technology.
['Young Ju Joo', 'Kyu Yon Lim', 'Nam Hee Kim']
The effects of secondary teachers' technostress on the intention to use technology in South Korea
590,153
The main purpose of this paper is to revisit the proximal point algorithms with over-relaxed $A$-maximal $m$-relaxed monotone mappings for solving variational inclusions in Hilbert spaces without the Lipschitz continuity requirement, in order to overcome the drawbacks of the paper (Verma, 2009) [5]. We affirmatively answer the open question raised in the paper (Huang and Noor, 2012) [6].
['Zhenyu Huang', 'Muhammad Aslam Noor']
Revisit the over-relaxed proximal point algorithm
830,853
In resource buying games a set of players jointly buys a subset of a finite resource set E (e.g., machines, edges, or nodes in a digraph). The cost of a resource e depends on the number (or load) of players using e, and has to be paid completely by the players before it becomes available. Each player i needs at least one set of a predefined family ${\mathcal S}_i\subseteq 2^E$ to be available. Thus, resource buying games can be seen as a variant of congestion games in which the load-dependent costs of the resources can be shared arbitrarily among the players. A strategy of player i in resource buying games is a tuple consisting of one of i's desired configurations $S_i\in{\mathcal S}_i$ together with a payment vector $p_i\in{\mathbb R}^E_+$ indicating how much i is willing to contribute towards the purchase of the chosen resources. In this paper, we study the existence and computational complexity of pure Nash equilibria (PNE, for short) of resource buying games. In contrast to classical congestion games for which equilibria are guaranteed to exist, the existence of equilibria in resource buying games strongly depends on the underlying structure of the families ${\mathcal S}_i$ and the behavior of the cost functions. We show that for marginally non-increasing cost functions, matroids are exactly the right structure to consider, and that resource buying games with marginally non-decreasing cost functions always admit a PNE.
['Tobias Harks', 'Britta Peis']
Resource buying games
547,098
We first review the notion of isochrons for oscillators, which has been developed and heavily utilized in mathematical biology for studying biological oscillations. Isochrons were instrumental in introducing a notion of generalized phase for an oscillation and form the basis for oscillator perturbation analysis formulations. Calculating the isochrons of an oscillator is a very difficult task: except for some very simple planar oscillators, isochrons cannot be calculated analytically, and one has to resort to numerical techniques. Previously proposed numerical methods for computing isochrons can be regarded as brute force, and they become totally impractical for non-planar oscillators with dimension greater than two. In this paper, we present a precise and carefully developed theory and advanced numerical techniques for computing local but quadratic approximations of isochrons. Previous work offers the theory and numerical methods needed for computing only linear approximations. Our treatment is general and applicable to oscillators of large dimension. We present examples of isochron computations, verify our results against exact calculations in a simple case, and allude to several applications, among many, where quadratic approximations of isochrons will be of use.
['Onder Suvak', 'Alper Demir']
Computing quadratic approximations for the isochrons of oscillators: a general theory and advanced numerical methods
368,097
Comments on a Cryptosystem Proposed by Wang and Hu
['R. Durán Díaz', 'L. Hernández Encinas', 'J. Muñoz Masqué']
Comments on a Cryptosystem Proposed by Wang and Hu
518,063
Imitation is an important aspect of emotion recognition. We present an expression training interface which evaluates the imitation of facial expressions and head movements. The system provides feedback on complex emotion expression, via an integrated emotion classifier which can recognize 18 complex emotions. Feedback is also provided for exact-expression imitation via dynamic time warping. Discrepancies in intensity and frequency of action units are communicated via simple graphs. This work has applications as a training tool for customer-facing professionals and people with Autism Spectrum Conditions.
['Andra Adams', 'Peter Robinson']
Expression training for complex emotions using facial expressions and head movements
560,908
A new version of the SSS* algorithm for searching game trees is presented. This algorithm is built around two recursive procedures. It finds the minimax value of a game tree by first establishing an upper bound on this value and then successively trying, in a top-down fashion, to tighten this bound until the minimax value has been obtained. This approach has several advantages, most notably that the algorithm is more perspicuous. Correctness and several other properties of SSS* can now be proven more easily. As an example we prove Pearl's characterization of the nodes visited by SSS*. Finally, the new algorithm is transformed into a practical version which allows an efficient use of memory.
['Wim Pijls', 'A. de Bruin']
Another View on the SSS* Algorithm
310,420
A memory architecture with the capability of self-test and self-repair is presented. The contributions of this memory architecture are twofold. First, it incorporates self-testing and self-repairing structures in the memory. As a result, the memory chips can perform tests, locate faults, and repair themselves without any external assistance, greatly improving the functional yield and reducing the production cost. Second, the architecture uses a hierarchical organization to achieve optimal conditions for memory access time. The hierarchical organization also increases the efficiency of the self-testing and self-repairing structures.
['Tom Chen', 'Glen Sunada']
An ultra-large capacity single-chip memory architecture with self-testing and self-repairing
85,610
Serious Games Evaluation: Processes, Models, and Concepts
['Katharina Emmerich', 'Mareike Bockholt']
Serious Games Evaluation: Processes, Models, and Concepts
900,786
Vibrato is an important factor that affects the naturalness of a synthetic singing voice. Therefore, the analysis and modeling of vibrato parameters are studied in this paper. The vibrato parameters of syllables segmented from recorded songs are analyzed using the short-time Fourier transform and the method of analytic signals. After the vibrato parameter values for all training syllables are extracted and normalized, they are used to train an artificial neural network (ANN) for each type of vibrato parameter. These ANN models are then used to generate the values of the vibrato parameters. Next, these parameter values and other musical information are used together to control a harmonic-plus-noise model (HNM) to synthesize Mandarin singing voice signals. Subjective perception tests on the synthetic singing voice show that the singing voice synthesized with the ANN-generated vibrato parameters is considerably more natural. Therefore, the combination of the ANN vibrato models and the HNM signal model is not only feasible for singing voice synthesis but also convenient for providing multiple singing voice timbres.
['Hung-Yan Gu', 'Zheng-Fu Lin']
Singing-voice Synthesis Using ANN Vibrato-parameter Models
620,312
Wireless sensor networks (WSNs) are being increasingly deployed in office blocks and residential areas for commercial applications such as home automation, meter reading, and surveillance. At these locations, WSNs experience interference in the 2.4GHz unlicensed band from wireless LANs (WLANs) and commercial microwave devices, leading to packet losses of up to 92%. In this paper, an algorithmic framework is proposed that allows the sensor nodes to identify the type of interferer and its operational channel, so that they may adapt their own transmissions to reduce packet losses in the network. Our proposed interference classification approach comprises (i) an offline measurement of the spectral characteristics of the WLAN and microwave devices to obtain a reference spectrum shape, and (ii) matching of the observed spectral pattern during network operation against the stored reference shape. The knowledge of the interferer characteristics is then leveraged by the sensor nodes to decide their transmission channel, packet scheduling times, and sleep-awake cycles. Results reveal that our approach yields 50-70% energy savings in the WSN by reducing interference-related packet losses.
['Kaushik R. Chowdhury', 'Ian F. Akyildiz']
Interferer Classification, Channel Selection and Transmission Adaptation for Wireless Sensor Networks
210,276
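A hedged sketch of step (ii) above; the reference shapes, bin count, and the use of normalized correlation as the match score are all illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

# Hypothetical unit-energy reference PSD shapes (64 bins, made up here).
references = {
    "wlan":      np.ones(64),                                        # broad occupancy
    "microwave": np.exp(-0.5 * ((np.arange(64) - 40) / 4.0) ** 2),   # narrow hump
}
references = {k: v / np.linalg.norm(v) for k, v in references.items()}

def classify_interferer(observed_psd):
    """Return the reference whose shape best matches the observed
    spectrum, scored by normalized correlation (cosine similarity)."""
    obs = observed_psd / np.linalg.norm(observed_psd)
    return max(references, key=lambda name: float(references[name] @ obs))

noisy = references["microwave"] + 0.05 * np.random.default_rng(2).random(64)
print(classify_interferer(noisy))  # -> "microwave"
```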
Data-Driven Motion Pattern Segmentation in a Crowded Environments
['Jana Trojanova', 'Karel Křehnáč', 'François Brémond']
Data-Driven Motion Pattern Segmentation in a Crowded Environments
933,183
Cardiac motion analysis from B-mode ultrasound sequence is a key task in assessing the health of the heart. The paper proposes a new methodology for cardiac motion analysis based on the temporal behaviour of points of interest on the myocardium. We define a new signal called the Temporal Flow Graph (TFG) which depicts the movement of a point of interest over time. It is a graphical representation derived from a flow field and describes the temporal evolution of a point. We prove that TFG for an object undergoing periodic motion is also periodic. This principle can be utilized to derive both global and local information from a given sequence. We demonstrate this for detecting motion irregularities at the sequence, as well as regional levels on real and synthetic data. A coarse localisation of anatomical landmarks such as centres of left/right cavities and valve points is also demonstrated using TFGs.
['V S R Veeravasarapu', 'Jayanthi Sivaswamy', 'Vishanji Karani']
Cardiac Motion Analysis by Temporal Flow Graphs
722,278
Resource allocation in bandwidth-sharing networks is inherently complex: the distributed nature of resource allocation management prohibits global coordination for efficiency, i.e., aiming at full resource usage at all times. In addition, it is well recognized that resource efficiency may conflict with other critical performance measures, such as flow delay. Without a notion of optimal (or ''near-optimal'') behavior, the performance of resource allocation schemes cannot be assessed properly. In previous work, we showed that optimal workload-based (or queue-length-based) strategies have certain structural properties (they are characterized by so-called switching curves), but are too complex, in general, to be determined exactly. In addition, numerically determining the optimal strategy often requires excessive computational effort. This raises the need for simpler strategies with ''near-optimal'' behavior that can serve as a sensible benchmark against which to test resource allocation strategies. We focus on flows traversing the network, sharing the resources on their common path with (independently generated) cross-traffic. Assuming exponentially distributed flow sizes, we show that in many scenarios optimizing under a fluid scaling gives a simple linear switching strategy that accurately approximates the optimal strategy. When two nodes on the flow path are equally congested, however, fluid scaling is not appropriate, and the corresponding strategy may not even ensure stability. In such cases, we show that the appropriate scaling for efficient workload-based allocations follows a square-root law. Armed with these results, we then assess the potential gain that any sophisticated strategy can achieve over standard α-fair strategies, which are representative of common distributed allocation schemes, and confirm that α-fair strategies perform excellently among non-anticipating policies. In particular, we can approximate the optimal policy with a weighted α-fair strategy.
['Im Maaike Verloop', 'Rudesindo Núñez-Queija']
Assessing the efficiency of resource allocations in bandwidth-sharing networks
142,357
Against the backdrop of responsible economic development, sustainable supply chain management (SSCM) is key to achieving the sustainable development for enterprise and industry. In this regard, sustainable supplier selection is crucial in SSCM. By integrating the three dimensions of sustainability, economic, environmental and social, this paper presents a new evaluation system for supplier selection from a sustainability perspective. Specifically, we design a decision mechanism for sustainable supplier selection based on linguistic 2-tuple grey correlation degree. In this proposed mechanism, the hybrid attribute values whereby real numbers, interval numbers and linguistic fuzzy variables coexist are transformed into linguistic 2-tuples. A ranking method based on linguistic 2-tuple grey correlation degree is then presented to rank the suppliers. An application example is presented to highlight the implementation, availability and feasibility of the proposed decision making mechanism.
['Congjun Rao', 'Mark Goh', 'Junjun Zheng']
Decision Mechanism for Supplier Selection Under Sustainability
916,275
When developing multi-agent systems, the initial choice of deployment platform has a long-term impact on the project, as it often restricts the architecture of agents, the communication protocols used, and the available services. The goal of this paper is to present the architecture of tATAmI-2.5, a framework that is able to integrate agents deployed in different environments and communicating over different communication platforms. This framework is based on the tATAmI-2 agent development and deployment framework, which allows agents to be deployed on various communication platforms without modifying the agent code. The details of the proposed architecture are presented, including insights into the bootstrap process and the routing of messages.
['Andrei Olaru', 'Adina Magda Florea']
A Framework for Integrating Heterogeneous Agent Communication Platforms
675,129
In this letter, we propose a new statistical model, the two-sided generalized gamma distribution (GΓD), for an efficient parametric characterization of speech spectra. The GΓD forms a generalized class of parametric distributions, including the Gaussian, Laplacian, and Gamma probability density functions (pdfs) as special cases. We also propose a computationally inexpensive online maximum likelihood (ML) parameter estimation algorithm for the GΓD. Likelihoods, coefficients of variation (CVs), and Kolmogorov-Smirnov (KS) tests show that the GΓD can model the distribution of real speech signals more accurately than the conventional Gaussian, Laplacian, Gamma, or generalized Gaussian distribution (GGD).
['Jong Won Shin', 'Joon-Hyuk Chang', 'Nam Soo Kim']
Statistical modeling of speech signals based on generalized gamma distribution
447,694
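The two-sided GΓD and its online ML estimator are the paper's own contributions and are not in standard libraries; the loosely related batch sketch below fits SciPy's one-sided generalized gamma to coefficient magnitudes, with the symmetry assumption, the synthetic data, and all names being ours.

```python
import numpy as np
from scipy.stats import gengamma

rng = np.random.default_rng(0)
coeffs = rng.laplace(scale=1.0, size=5000)  # stand-in for speech spectral coefficients

# Fit the one-sided generalized gamma to |coeffs|, pinning the location at 0;
# a symmetric two-sided pdf would then be 0.5 * gengamma.pdf(|x|, ...).
a, c, loc, scale = gengamma.fit(np.abs(coeffs), floc=0)
print(f"shape a={a:.3f}, power c={c:.3f}, scale={scale:.3f}")
```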
The Big Data Analysis of the Next Generation Video Surveillance System for Public Security
['Zheng Xu', 'Zhiguo Yan', 'Lin Mei', 'Hui Zhang']
The Big Data Analysis of the Next Generation Video Surveillance System for Public Security
877,339
This paper presents a hardware implementation of multilayer feedforward neural networks (NNs) using reconfigurable field-programmable gate arrays (FPGAs). Despite improvements in FPGA densities, the numerous multipliers in an NN limit the size of the network that can be implemented using a single FPGA, thus making NN applications commercially unviable. The proposed implementation is aimed at reducing resource requirements, without much compromise on speed, so that a larger NN can be realized on a single chip at a lower cost. The sequential processing of the layers in an NN is exploited in this paper to implement large NNs using a method of layer multiplexing. Instead of realizing a complete network, only the single largest layer is implemented. The same layer behaves as different layers with the help of a control block. The control block ensures proper functioning by assigning the appropriate inputs, weights, biases, and excitation function of the layer that is currently being computed. Multilayer networks have been implemented using the Xilinx FPGA "XCV400hq240". The concept used is shown to be very effective in reducing resource requirements at the cost of a moderate overhead in speed. This implementation is proposed to make NN applications viable in terms of cost and speed for online applications. An NN-based flux estimator is implemented in the FPGA and the results obtained are presented.
['S. Himavathi', 'D. Anitha', 'A. Muthuramalingam']
Feedforward Neural Network Implementation in FPGA Using Layer Multiplexing for Effective Resource Utilization
153,945
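A software analogy (ours; the paper describes an RTL design, not Python) of the layer-multiplexing idea above: one shared layer is evaluated repeatedly while a control loop swaps in each logical layer's weights and biases. Shapes and the sigmoid excitation function are illustrative assumptions.

```python
import numpy as np

def shared_layer(x, W, b):
    """The single physical layer: multiply-accumulate plus sigmoid."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

def multiplexed_forward(x, weights, biases):
    """Control-block behavior: reuse the one layer, assigning each
    logical layer's weights and biases in sequence."""
    for W, b in zip(weights, biases):
        x = shared_layer(x, W, b)
    return x

# Hypothetical 3-4-2 network evaluated through one shared layer.
rng = np.random.default_rng(1)
Ws = [rng.standard_normal((4, 3)), rng.standard_normal((2, 4))]
bs = [rng.standard_normal(4), rng.standard_normal(2)]
print(multiplexed_forward(rng.standard_normal(3), Ws, bs))
```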
Rumors are regular features of crisis events due to the extreme uncertainty and lack of information that often characterizes these settings. Despite recent research that explores rumoring during crisis events on social media platforms, limited work has focused explicitly on how individuals and groups express uncertainty. Here we develop and apply a flexible typology for types of expressed uncertainty. By applying our framework across six rumors from two crisis events we demonstrate the role of uncertainty in the collective sensemaking process that occurs during crisis events.
['Kate Starbird', 'Emma S. Spiro', 'Isabelle Edwards', 'Kaitlyn Zhou', 'Jim Maddock', 'Sindhuja Narasimhan']
Could This Be True?: I Think So! Expressed Uncertainty in Online Rumoring
776,878
Background: When growing budding yeast under continuous, nutrient-limited conditions, over half of yeast genes exhibit periodic expression patterns. Periodicity can also be observed in respiration, in the timing of cell division, as well as in various metabolite levels. Knowing the transcription factors involved in the yeast metabolic cycle is helpful for determining the cascade of regulatory events that cause these patterns.
['Aliz R. Rao', 'Matteo Pellegrini']
Regulation of the yeast metabolic cycle by transcription factors with periodic activities
483,473
We analyze sampling representations for translation-invariant, linear and bounded systems operating on band-limited signals. First, we characterize suitable kernels for reconstruction processes with and without oversampling. Then, we investigate the convergence behavior of general approximation processes, operating only on the samples and not on the whole continuous-time signal, for translation-invariant, linear and bounded systems and signals in the Paley-Wiener space $\mathcal{PW}_\pi^1$. Recently, Habib analyzed similar questions for a larger space of functions, namely the Zakai class, but for a considerably smaller class of systems, not including the Hilbert transformation and the ideal low-pass filter. We show that for important systems there exists no approximation process that is uniformly convergent for all functions in $\mathcal{PW}_\pi^1$. Surprisingly, oversampling and the design of special kernels do not improve the convergence behavior in this case. Furthermore, a simple criterion is given for checking whether a certain approximation process is convergent for a given system or not.
['Holger Boche', 'Ullrich J. Monich']
General behavior of sampling-based signal and system representation
353,277
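For context (notation ours): the prototypical sampling-based approximation of a stable LTI system $T$ acting on $\mathcal{PW}_\pi^1$ is

$$ (T_N f)(t) \;=\; \sum_{k=-N}^{N} f(k)\,(T\,\mathrm{sinc})(t - k), $$

which uses only the samples $f(k)$; the negative result above says that for certain important systems, presumably including the Hilbert transformation mentioned there, no choice of reconstruction kernel makes processes of this kind uniformly convergent for every $f \in \mathcal{PW}_\pi^1$.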
Digital libraries in healthcare are hosting an inherently large and continually growing collection of digital information. Especially in medical digital libraries, this information needs to be analyzed and processed in a timely manner. Sensor data streams, for instance, providing continuous information on patients have to be processed on-line in order to detect critical situations. This is done by combining existing services and operators into streaming processes. Since the individual processing steps are quite complex, it is important to efficiently make use of the resources in a distributed system by dynamically parallelizing operators and services. The Grid vision already considers the efficient routing and distribution of service requests. In this paper, we present a novel information management infrastructure based on a hyperdatabase system that combines the process-based composition of services and operators needed for sensor data stream processing with advanced grid features.
['Manfred Wurz', 'Gert Brettlecker', 'Heiko Schuldt']
A combined hyperdatabase and grid infrastructure for data stream management and digital library processes
831,874
The Password Thicket: Technical and Market Failures in Human Authentication on the Web.
['Joseph Bonneau', 'Sören Preibusch']
The Password Thicket: Technical and Market Failures in Human Authentication on the Web.
742,759
The discriminative optimization of decoding networks is important for minimizing speech recognition error. Recently, several methods have been reported that optimize decoding networks by extending weighted finite state transducer (WFST)-based decoding processes to a linear classification process. In this paper, we model decoding processes by using conditional random fields (CRFs). Since the maximum mutual information (MMI) training technique is straightforwardly applicable for CRF training, several sophisticated training methods proposed as the variants of MMI can be incorporated in our decoding network optimization. This paper adapts the boosted MMI and the differenced MMI methods for decoding network optimization so that state transition errors are minimized in WFST decoding. We evaluated the proposed methods by conducting large-vocabulary continuous speech recognition experiments. We confirmed that the CRF-based framework and transition error minimization are efficient for improving the accuracy of automatic speech recognizers.
['Yotaro Kubo', 'Shinji Watanabe', 'Atsushi Nakamura']
Decoding network optimization using minimum transition error training
388,491
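For reference, the MMI criterion that the entry above builds on is commonly written as follows, over training utterances $r$ with observations $O_r$, reference transcriptions $W_r$ and acoustic scale $\kappa$; this is a standard textbook form, and the notation is ours rather than the paper's:

$$\mathcal{F}_{\mathrm{MMI}}(\lambda) = \sum_{r} \log \frac{p_{\lambda}(O_r \mid W_r)^{\kappa}\, P(W_r)}{\sum_{W} p_{\lambda}(O_r \mid W)^{\kappa}\, P(W)}$$

The boosted MMI variant reweights each denominator hypothesis by $e^{-b\, A(W, W_r)}$, where $A(W, W_r)$ measures the accuracy of hypothesis $W$ against the reference $W_r$ and $b$ is the boosting factor, so that low-accuracy competitors receive more weight during training.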
Traffic Incident Detection Using Probabilistic Topic Model.
['Akira Kinoshita', 'Atsuhiro Takasu', 'Jun Adachi']
Traffic Incident Detection Using Probabilistic Topic Model.
740,124
A Two Tiers Data Aggregation Scheme for Periodic Sensor Networks.
['Jacques M. Bahi', 'Abdallah Makhoul', 'Maguy Medlej']
A Two Tiers Data Aggregation Scheme for Periodic Sensor Networks.
800,673
We use a sample of 454 service employees to examine the relationship between service employees' job satisfaction and service quality, and the role of organizational citizenship behavior and turnover intention. The results show that service employees' job satisfaction has a significant positive effect on organizational citizenship behavior and on service quality (both soft quality and hard quality), while having a strongly negative impact on turnover intention. Organizational citizenship behavior plays a partially mediating role between job satisfaction and the soft quality of service employees. Organizational citizenship behavior and turnover intention do not mediate between service employees' job satisfaction and the hard quality of service employees. These conclusions have important theoretical and practical significance for service enterprises seeking to improve human resource management and gain a competitive advantage.
['Xiaojun Zhan']
The Impact Mechanism of Service Employees' Job Satisfaction on Service Quality: The Role of OCB and Turnover Intention
919,415
Among the various costs of a context switch, its impact on the performance of L2 caches is the most significant because of the resulting high miss penalty. To reduce the impact of frequent context switches, we propose restoring a program's locality by prefetching into the L2 cache the data the program was using before it was swapped out. A Global History List is used to record a process's L2 read accesses in LRU order. These accesses are saved along with the process's context when the process is swapped out, and loaded to guide prefetching when it is swapped in. We also propose a feedback mechanism that greatly reduces the memory traffic incurred by our prefetching scheme. Experiments show significant speedup over baseline architectures with and without traditional prefetching in the presence of frequent context switches.
['Hanyu Cui', 'Suleyman Sair']
Extending data prefetching to cope with context switch misses
213,675
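A toy software model of the Global History List idea described in the entry above: record a process's L2 read addresses in LRU order, save them at switch-out, and replay them as prefetches at switch-in. This is an illustrative simulation sketch, not the paper's hardware design; the capacity and addresses are made up.

```python
from collections import OrderedDict

class GlobalHistoryList:
    def __init__(self, capacity=8):
        self.capacity = capacity          # number of block addresses tracked
        self.lru = OrderedDict()          # insertion order == LRU order

    def record_read(self, block_addr):
        # Move a re-referenced block to the MRU position.
        self.lru.pop(block_addr, None)
        self.lru[block_addr] = True
        if len(self.lru) > self.capacity:
            self.lru.popitem(last=False)  # evict the LRU entry

    def save_on_switch_out(self):
        # Snapshot saved together with the process context.
        return list(self.lru.keys())

def prefetch_on_switch_in(snapshot, issue_prefetch):
    # Replay the saved working set oldest-first, so the MRU blocks arrive
    # last and are least likely to be evicted before they are used.
    for block_addr in snapshot:
        issue_prefetch(block_addr)

ghl = GlobalHistoryList(capacity=4)
for addr in [0x100, 0x140, 0x180, 0x100, 0x1C0]:
    ghl.record_read(addr)
saved = ghl.save_on_switch_out()          # [0x140, 0x180, 0x100, 0x1C0]
prefetch_on_switch_in(saved, lambda a: print(f"prefetch {hex(a)}"))
```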
Case retrieval from a clustered case memory consists of finding the clusters most similar to the new input case and then retrieving the cases from them. Although the computational time is improved, the accuracy rate may be degraded if the clusters are not representative enough due to data geometry. This paper proposes a methodology that allows the expert to analyze case retrieval strategies from a clustered case memory according to the required computational time improvement and the maximum accepted accuracy reduction. The mechanisms used to assess the data geometry are complexity measures. This methodology is successfully tested on a case memory organized by a Self-Organizing Map.
['Albert Fornells', 'Elisabet Golobardes', 'Josep María Montaner Martorell', 'Josep Maria Garrell', 'Núria Macià', 'Ester Bernado']
A Methodology for Analyzing Case Retrieval from a Clustered Case Memory
468,410
Sensorimotor adaptation is an important focus in the study of motor learning for non-disordered speech, but has yet to be studied substantially for speech rehabilitation. Speech adaptation is typically elicited experimentally using LPC resynthesis to modify the sounds that a speaker hears himself producing. This method requires that the participant be able to produce a robust speech-acoustic signal and is therefore not well-suited for talkers with dysarthria. We have developed a novel technique using electromagnetic articulography (EMA) to drive an articulatory synthesizer. The acoustic output of the articulatory synthesizer can be perturbed experimentally to study auditory feedback effects on sensorimotor learning. This work aims to compare sensorimotor adaptation effects using our articulatory resynthesis method with effects from an established, acoustic-only method. Results suggest that the articulatory resynthesis method can elicit speech adaptation, but that the articulatory effects of the two methods differ.
['Jeffrey J. Berry', 'Cassandra North', 'Michael T. Johnson']
Sensorimotor adaptation of speech using real-time articulatory resynthesis
107,821
Role-based access control (RBAC) is a superset of mandatory access control (MAC) and discretionary access control (DAC). Since MAC and DAC are useful for information flow control that protects privacy within an application, RBAC can certainly be applied to privacy concerns. The key benefits of fundamental RBAC are simplified systems administration and enhanced systems security and integrity. However, it considers neither privacy protection nor control of method invocation through argument sensitivity. In this paper, a privacy-enhanced role-based access control (PERBAC) model is proposed. Privacy-related components, such as purpose and purpose hierarchy, are added to the new model. Also, an information flow analysis technique and a privacy checking algorithm are introduced to support controlling method invocation through argument sensitivity.
['Cungang Yang', 'Chang N. Zhang']
A privacy enhanced role-based access control model for enterprises
342,347
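A minimal sketch of a purpose-aware RBAC check in the spirit of the PERBAC model described above. The role, purpose and argument-sensitivity data are hypothetical illustrations, not the paper's formal definitions; a purpose hierarchy could be modelled by expanding each purpose into its descendants.

```python
ROLE_PERMISSIONS = {
    "nurse":  {("read", "patient_record")},
    "doctor": {("read", "patient_record"), ("write", "patient_record")},
}
# Purposes allowed per (role, object) pair; hypothetical example data.
ALLOWED_PURPOSES = {
    ("nurse", "patient_record"):  {"treatment"},
    ("doctor", "patient_record"): {"treatment", "research"},
}
SENSITIVE_ARGS = {"ssn", "diagnosis"}  # arguments requiring extra clearance

def check_access(role, action, obj, purpose, args, clearance):
    if (action, obj) not in ROLE_PERMISSIONS.get(role, set()):
        return False                      # basic RBAC check
    if purpose not in ALLOWED_PURPOSES.get((role, obj), set()):
        return False                      # privacy: purpose binding
    # Argument sensitivity: sensitive parameters need explicit clearance.
    return all(a not in SENSITIVE_ARGS or a in clearance for a in args)

print(check_access("nurse", "read", "patient_record", "treatment",
                   args=["name"], clearance=set()))   # True
print(check_access("nurse", "read", "patient_record", "research",
                   args=["name"], clearance=set()))   # False: wrong purpose
```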
Palmprint Recognition Based on Local Texture Features.
['Slobodan Ribaric', 'Markan Lopar']
Palmprint Recognition Based on Local Texture Features.
764,794
The use of near-IR images for face recognition has been proposed as a means to address illumination issues that can hinder standard visible light face matching. However, most existing non-experimental databases contain visible light images. This makes the matching of near-IR face images to visible light face images an interesting and useful challenge. Image pre-processing techniques can potentially be used to help reduce the differences between near-IR and visible light images, with the goal of improving matching accuracy. We evaluate the use of several such techniques in combination with commercial matchers and show that simply extracting the red plane results in a comparable improvement in accuracy. In addition, we show that many of the pre-processing techniques hinder the ability of existing commercial matchers to extract templates. We also make available a new dataset called Near Infrared Visible Light Database (ND-NIVL) consisting of visible light and near-IR face images with accompanying baseline performance for several commercial matchers.
['John S. Bernhard', 'Jeremiah R. Barr', 'Kevin W. Bowyer', 'Patrick J. Flynn']
Near-IR to visible light face matching: Effectiveness of pre-processing options for commercial matchers
588,645
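The entry above notes that simply extracting the red plane of a visible-light image performs comparably to heavier pre-processing when matching against near-IR images. A minimal NumPy sketch of that step follows; the RGB channel order is an assumption (some toolkits, e.g. OpenCV, store BGR instead).

```python
import numpy as np

def red_plane(rgb_image: np.ndarray) -> np.ndarray:
    """Return the red channel as a single-plane grayscale-like image."""
    assert rgb_image.ndim == 3 and rgb_image.shape[2] >= 3
    return rgb_image[:, :, 0]  # channel 0 assumed to be red (RGB order)

img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
gray_like = red_plane(img)   # feed this to the matcher in place of full RGB
print(gray_like.shape)       # (480, 640)
```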
The interest in network virtualization has been growing steadily among the networking community in the last few years. Network virtualization opens up new possibilities for the evolution path to the Future Internet by enabling the deployment of different architectures and protocols over a shared physical infrastructure. The deployment of network virtualization imposes new requirements and raises new issues in relation to how networks are provisioned, managed and controlled today. The starting point for this paper is the network virtualization reference model conceived in the framework of the EU funded 4WARD project. In this paper we look at network virtualization mainly from the perspective of the network infrastructure provider, following the 4WARD network virtualization architecture and evaluate the main issues and challenges to be faced in commercial operator environments.
['Jorge Carapinha', 'Javier Jimenez']
Network virtualization: a view from the bottom
383,173
We present a lexical platform developed for the Spanish language that achieves portability across computer systems and efficiency in terms of speed and lexical coverage. A model for the full treatment of Spanish inflectional morphology for verbs, nouns and adjectives is presented. This model permits word formation based solely on morpheme concatenation, driven by a feature-based unification grammar. The run-time lexicon is a collection of allomorphs for both stems and endings. Although not tested, it should also be suitable for other Romance and highly inflected languages. A formalism is also described for encoding a lemma-based lexical source, well suited for expressing linguistic generalizations: inheritance classes, lemma encoding, morpho-graphemic allomorphy rules and limited type-checking. From this source base, we can automatically generate an allomorph-indexed dictionary adequate for efficient retrieval and processing. A set of software tools has been implemented around this formalism: lexical base augmenting aids, lexical compilers to build run-time dictionaries and access libraries for them, feature manipulation libraries, unification and pseudo-unification modules, morphological processors, a parsing system, etc. Software interfaces among the different modules and tools are cleanly defined to ease software integration and flexible tool combination. Directions for accessing our e-mail and web demonstration prototypes are also provided. Some figures are given showing the lexical coverage of our platform compared to some popular spelling checkers.
['José M. Goñi', 'José Carlos González', 'Antonio Moreno']
ARIES: A lexical platform for engineering Spanish processing tools
521,795
Sylli: Automatic Phonological Syllabification for Italian.
['Luca Iacoponi', 'Renata Savy']
Sylli: Automatic Phonological Syllabification for Italian.
755,052
Dynamic software updating research efforts have mostly been focused on updating application code and in-memory state. As more and more applications use embedded databases for storage, dynamic updating solutions will have to support changes to embedded database schemas. The first step towards supporting dynamic updates to embedded database schemas is understanding how these schemas change—so far, schema evolution studies have focused on large, enterprise-class databases. In this paper we propose an approach for automatically extracting embedded schemas from regular applications, e.g., written in C and C++, and automatically computing how schemas change as applications evolve. To showcase our approach, we perform a long-term schema evolution study on four popular open source programs that use embedded databases: Firefox, Monotone, BiblioteQ and Vienna. Our study spans 18 cumulative years of schema evolution and reveals that change patterns and frequency in embedded databases differ from schema changes in enterprise-class databases that formed the object of prior studies. Our platform can be used for performing long-term, large-scale embedded schema evolution studies that are potentially beneficial to dynamic updating and schema evolution researchers.
['Shengfeng Wu', 'Iulian Neamtiu']
Schema evolution analysis for embedded databases
701,204
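An illustrative sketch of computing schema changes between two versions of an embedded SQLite database, in the spirit of the study described above. Note the paper extracts schemas from application source code; this sketch diffs live schemas instead, and the example tables are hypothetical.

```python
import sqlite3

def schema(conn):
    """Map each table to {column_name: declared_type}."""
    tables = {}
    for (name,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table'"):
        cols = conn.execute(f"PRAGMA table_info({name})").fetchall()
        tables[name] = {c[1]: c[2] for c in cols}  # (cid, name, type, ...)
    return tables

def diff_schemas(old, new):
    changes = []
    for t in new.keys() - old.keys():
        changes.append(("add_table", t))
    for t in old.keys() - new.keys():
        changes.append(("drop_table", t))
    for t in old.keys() & new.keys():
        for c in new[t].keys() - old[t].keys():
            changes.append(("add_column", t, c))
        for c in old[t].keys() - new[t].keys():
            changes.append(("drop_column", t, c))
    return changes

v1 = sqlite3.connect(":memory:")
v1.execute("CREATE TABLE bookmarks (id INTEGER, url TEXT)")
v2 = sqlite3.connect(":memory:")
v2.execute("CREATE TABLE bookmarks (id INTEGER, url TEXT, title TEXT)")
print(diff_schemas(schema(v1), schema(v2)))
# [('add_column', 'bookmarks', 'title')]
```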
We consider periodic review inventory control problems in directed networks, primary examples of which are distribution systems and assembly systems. External demand could occur at each node. When inventory is insufficient to meet requirements at a node, a portion of this demand is backordered and the remaining is lost. External demands, as well as lead times for inventory purchase, assembly, and transportation, are stochastic. In each period, linear sales revenues and the following costs, all linear, are charged at each node: (a) inventory purchase/assembly/transfer cost to receive inventory, (b) holding cost, (c) backorder cost, and (d) lost sales cost. When the objective function of interest is a discounted sum of profits over a finite planning horizon, it is shown that the sales prices and the inventory purchase/assembly/transfer cost parameters can be assumed to be zero without loss of generality. The result is proved for every realization of demands and lead times. Some extensions to these results are discussed. During this process, we also generalize the concept of echelon inventories to directed networks.
['Ganesh Janakiraman', 'John A. Muckstadt']
Inventory Control in Directed Networks: A Note on Linear Costs
356,246
The combination of competitive security exercises and hands-on learning represents a powerful approach for teaching information system security. Although creating and maintaining such a course can be difficult, the benefits to learning are worthwhile. Our undergraduate Information Assurance course is practice-focused and makes substantial use of competitive exercises, such as the National Security Agency Cyber Defense Exercise, to promote learning. We recount experiences and lessons learned from creating and conducting this course.
['Robert Fanelli', "Terrence J. O'Connor"]
Experiences with practice-focused undergraduate security education
674,070
This paper presents a web-based knowledge management system that supports information browsing using both semantic tagging and the Cyc knowledge base. Information contents are not linked to each other but are tagged in a way that reflects the semantics of the documents. The tags used are also related to concepts in the Cyc knowledge base. The relationships that connect these concepts are carried over to the tags and to the information contents, building a large set of dynamic links that support users' information navigation and sharing.
['Ignazio Infantino', 'Giovanni Pilato', 'Riccardo Rizzo', 'Filippo Vella']
A Cyc-based Web System for Semantic Organization, Search and Browsing of Knowledge Items
95,041
In the classic k-center problem, we are given a metric graph, and the objective is to select k nodes as centers such that the maximum distance from any vertex to its closest center is minimized. In this paper, we consider two important generalizations of k-center, the matroid center problem and the knapsack center problem. Both problems are motivated by recent content distribution network applications. Our contributions can be summarized as follows: (1) We consider the matroid center problem in which the centers are required to form an independent set of a given matroid. We show this problem is NP-hard even on a line. We present a 3-approximation algorithm for the problem on general metrics. We also consider the outlier version of the problem where a given number of vertices can be excluded as outliers from the solution. We present a 7-approximation for the outlier version. (2) We consider the (multi-)knapsack center problem in which the centers are required to satisfy one (or more) knapsack constraint(s). It is known that the knapsack center problem with a single knapsack constraint admits a 3-approximation. However, when there are at least two knapsack constraints, we show this problem is not approximable at all. To complement the hardness result, we present a polynomial time algorithm that gives a 3-approximate solution such that one knapsack constraint is satisfied and the others may be violated by at most a factor of $1+\epsilon$. We also obtain a 3-approximation for the outlier version that may violate the knapsack constraint by $1+\epsilon$.
['Danny Z. Chen', 'Jian Li', 'Hongyu Liang', 'Haitao Wang']
Matroid and Knapsack Center Problems
892,753
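For background, the classic greedy farthest-point heuristic (Gonzalez's 2-approximation) for unconstrained k-center, which the matroid and knapsack variants above generalize. This sketch is not the paper's 3-approximation algorithm; points and the metric are illustrative.

```python
import math

def greedy_k_center(points, k):
    """points: list of (x, y) tuples; returns k centers chosen greedily."""
    centers = [points[0]]                     # arbitrary first center
    dist = [math.dist(p, centers[0]) for p in points]
    while len(centers) < k:
        # Pick the point farthest from its nearest chosen center.
        i = max(range(len(points)), key=lambda j: dist[j])
        centers.append(points[i])
        dist = [min(d, math.dist(p, points[i])) for d, p in zip(dist, points)]
    return centers, max(dist)                 # centers and covering radius

pts = [(0, 0), (1, 0), (10, 0), (11, 1), (5, 5)]
centers, radius = greedy_k_center(pts, k=2)
print(centers, round(radius, 2))   # [(0, 0), (11, 1)] 7.07
```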
The richness of crossmodal feedback in car driving makes it an engaging, complex, yet “natural” activity. Audition plays an important role, as the engine sound, perceived in the cabin, conveys relevant cues about the vehicle motion. In this paper, we introduce a procedural and physically informed model for synthetic combustion engine sound, as an effective, flexible and computationally efficient alternative to sample-based and analysis/resynthesis approaches. The sound model, currently being developed as Max/MSP external, has been integrated in GeneCars, a driving simulator environment for industrial sound design, and SkAT Studio, a demonstration framework for the rapid creation of audio processing workflows.
['Stefano Baldan', 'Hélène Lachambre', 'Stefano Delle Monache', 'Patrick Boussard']
Physically informed car engine sound synthesis for virtual and augmented environments
575,628
Capacitive Leadframe testing is an effective approach for detecting faults in printed circuit boards. Capacitance measurements, however, are affected by mechanical variations during testing and by tolerances of electrical parameters of components, making it difficult to use threshold based techniques for defect detection. A novel approach is presented for identifying boards that are likely to be outliers. Based on Principal Components Analysis (PCA), this approach treats the set of capacitance measurements of individual connectors or sockets in a holistic manner to overcome the measurement and component parameter variations inherent in test data. The effectiveness of the method is evaluated using measurements on three different boards. Enhancements to the technique to increase the resolution of the method are presented and evaluated.
['Xin He', 'Yashwant K. Malaiya', 'Anura P. Jayasumana', 'Kenneth P. Parker', 'Stephen Hird']
An outlier detection based approach for PCB testing
147,079
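A sketch of the PCA-based outlier idea described above: learn the principal subspace of capacitance vectors from known-good boards and flag boards whose reconstruction residual is unusually large. The data, subspace dimension and threshold rule here are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
good = rng.normal(10.0, 0.2, size=(50, 8))    # 50 good boards, 8 pins each
test = good[:3].copy()
test[1, 4] += 2.5                              # inject an open-pin style fault

mean = good.mean(axis=0)
# Principal directions of the good-board population via SVD.
_, _, vt = np.linalg.svd(good - mean, full_matrices=False)
components = vt[:3]                            # keep the top-3 components

def residual(x):
    proj = (x - mean) @ components.T           # coordinates in PCA subspace
    recon = proj @ components + mean           # map back to measurement space
    return np.linalg.norm(x - recon, axis=1)   # residual norm per board

good_err = residual(good)
threshold = good_err.mean() + 3 * good_err.std()   # crude data-driven cutoff
test_err = residual(test)
print("residuals:", np.round(test_err, 2),
      "outlier boards:", np.where(test_err > threshold)[0])   # expects [1]
```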
The maximum likelihood detection with QR decomposition and M-algorithm (QRM-MLD) has been presented as a sub-optimum multiple-input multiple-output (MIMO) detection scheme which can provide almost the same performance as the optimum maximum likelihood (ML) MIMO detection scheme but with reduced complexity. However, due to the lack of parallelism and regularity in the decoding structure, the conventional QRM-MLD, which uses a tree structure, still has very high complexity for very large scale integration (VLSI) implementation. In this paper, we modify the tree structure of the conventional QRM-MLD into a trellis structure in order to obtain high operational parallelism and regularity, and then apply the Viterbi algorithm to the QRM-MLD to ease the burden of VLSI implementation. We show from our selected numerical examples that, by using the QRM-MLD with our proposed trellis structure, we can reduce the complexity significantly compared to the tree-structure based QRM-MLD, while the performance degradation of our proposed scheme is negligible.
['S.H. Choi', 'Young Chai Ko']
Implementation-Friendly QRM-MLD using Trellis-Structure based on Viterbi Algorithm
148,400
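A generic Viterbi search over a trellis, the dynamic-programming core that the trellis-structured QRM-MLD above exploits. The branch metrics of an actual MIMO detector would come from the QR-decomposed channel; here a toy metric stands in for them.

```python
import math

def viterbi(num_states, num_stages, branch_metric):
    """branch_metric(stage, prev_state, state) -> additive cost."""
    cost = [0.0] * num_states
    back = []
    for t in range(num_stages):
        new_cost = [math.inf] * num_states
        ptr = [0] * num_states
        for s in range(num_states):
            for p in range(num_states):
                c = cost[p] + branch_metric(t, p, s)
                if c < new_cost[s]:
                    new_cost[s], ptr[s] = c, p
        cost, back = new_cost, back + [ptr]
    # Trace back the minimum-cost survivor path.
    state = min(range(num_states), key=lambda s: cost[s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    return list(reversed(path))[1:], min(cost)

# Toy metric: staying in the same state is cheap, switching is expensive.
bm = lambda t, p, s: 0.1 if p == s else 1.0
print(viterbi(num_states=2, num_stages=4, branch_metric=bm))
# ([0, 0, 0, 0], 0.4)
```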
This paper presents an attack-resilient estimation scheme for uniformly observable nonlinear systems having redundant sensors, when a subset of sensors is corrupted by adversaries. We first design an individual high-gain observer for each measurement output so that partial information about the system state is obtained. Then, a nonlinear error correcting problem is formulated by collecting all the information from the partial observers, and it can be solved by exploiting redundancy. Most of the time, only a computationally efficient monitoring system runs, and it detects every influential attack. A simple switching logic explores another combination of output measurements to find a correct candidate only when a residual signal exceeds its threshold.
['Junsoo Kim', 'Chanhwa Lee', 'Hyungbo Shim', 'Yongsoon Eun', 'Jin H. Seo']
Detection of sensor attack and resilient state estimation for uniformly observable nonlinear systems
976,959
Tweeting while watching TV has become a popular phenomenon in the United States, so much so that TV networks actively encourage tweeting through scheduling and incentives. Through tweets collected and interviews conducted during the TV show Glee, this study explores what makes live-tweeting compelling for participant viewers. Early results of this ongoing project suggest that sharing a social experience with others and expressing oneself to a larger crowd (1) enhance one's experience of watching a television simulcast, and (2) motivate continued live-tweeting behaviors.
['Kimra McPherson', 'Kai Huotari', 'F. Yo-Shang Cheng', 'David Humphrey', 'Coye Cheshire', 'Andrew L. Brooks']
Glitter: a mixed-methods study of twitter use during glee broadcasts
93,613
Web servers such as Apache and web proxies like Squid support event logging using a common log format. The logs produced using these de-facto standard formats are invaluable to system administrators for troubleshooting a server and tool writers to craft tools that mine the log files and produce reports and trends. The Session Initiation Protocol (SIP) does not have a common log format, and as a result, each server supports a distinct log format. This plethora of formats discourages the creation of common tools. Whilst SIP is similar to HTTP, there are a number of fundamental differences between a session-mode protocol and a stateless request-response protocol. We propose a common log file format for SIP servers that can be used uniformly by proxies, registrars, redirect servers as well as back-to-back user agents. Such a canonical file can be used to train anomaly detection systems and feed events into a security event management system.
['Vijay K. Gurbani', 'Eric Burger', 'Carol Davids', 'Tricha Anjali']
SIP CLF: a common log format (CLF) for the session initiation protocol (SIP)
774,280
The effect of natural weathering on the mechanical properties of a polyester fabric is studied. An outdoor exposure experiment on polyester fabric in Harbin from November 2009 to June 2010 was performed, and several polyester fabric samples were chosen every two months for simple tension experiments to evaluate the variation in their mechanical properties. The mechanical experiment results show that tensile strength and Young's modulus decrease as the exposure period increases. A damage model is presented to evaluate the degradation of the polyester fabric. The proposed damage model agrees with the mechanical experimental results very well.
['Huifeng Tan', 'Xianghong Bai', 'Guochang Lin']
The influence of natural weathering on the property of polyester fabric
328,867
The mean-shift algorithm has achieved considerable success in object tracking due to its simplicity and robustness. However, the lack of template updating often leads to a failure to adapt to affine transformations of the object. The Lucas-Kanade algorithm has some advantages in obtaining the affine parameters. In this paper, we introduce the inverse compositional algorithm, which is equivalent to but more efficient than the Lucas-Kanade algorithm, to complement the traditional mean-shift algorithm. In this method, the average squared error (ASE) between the initial template and the object image warped through the obtained affine parameters is computed to decide whether to update the current template. Experimental results show that mean-shift tracking with the Lucas-Kanade algorithm (MSLK) has high tracking accuracy and good robustness to changes in the appearance of the object.
['Lurong Shen', 'Xinsheng Huang', 'Wanying Xu', 'Yongbin Zheng']
Robust Visual Tracking by Integrating Lucas-Kanade into Mean-Shift
441,248
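A sketch of the template-update test described above: compare the initial template with the frame warped by the recovered affine parameters via the average squared error (ASE), and refresh the template only when the ASE is low enough to trust the alignment. The warp step is omitted and the threshold is an illustrative assumption, not the paper's setting.

```python
import numpy as np

def average_squared_error(template: np.ndarray, warped: np.ndarray) -> float:
    diff = template.astype(np.float64) - warped.astype(np.float64)
    return float(np.mean(diff ** 2))

def maybe_update_template(template, warped, threshold=50.0):
    ase = average_squared_error(template, warped)
    # Low ASE => alignment is trustworthy => safe to adopt the new template.
    return (warped, ase) if ase < threshold else (template, ase)

tmpl = np.full((32, 32), 100.0)
warped = tmpl + np.random.default_rng(1).normal(0, 3, size=(32, 32))
new_tmpl, ase = maybe_update_template(tmpl, warped)
print(round(ase, 1), new_tmpl is warped)   # small ASE -> template refreshed
```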
The dynamic spectrum access (DSA) capability of cognitive radio networks (CRN) promises to resolve both the spectrum scarcity and the low spectrum utilization problems caused by today's static spectrum access (SSA) policy. With DSA, CRN nodes search the dynamically accessible spectrum bands for communication. In this paper, we study a random DSA scheme, where each node randomly selects its operating band based on the locally detected accessible spectrum bands. This scheme does not need the coordination or exchange of control messages to select a communication band between a sender and a receiver, and is desirable in certain scenarios. We analyze the performance of this random DSA scheme. Numerical results show that the random DSA scheme can achieve 60% of the theoretical maximum performance.
['Chunsheng Xin', 'Min Song', 'Liangping Ma', 'George Hsieh', 'Chien-Chung Shen']
On Random Dynamic Spectrum Access for Cognitive Radio Networks
169,480
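A Monte Carlo sketch of the random DSA scheme described above: sender and receiver each pick one band uniformly from the bands they locally sense as free, and communication succeeds when the picks coincide. The band count, sensing probability and success criterion are illustrative assumptions, not the paper's model.

```python
import random

def rendezvous_prob(num_bands=10, free_prob=0.6, trials=100_000, seed=7):
    rng = random.Random(seed)
    hits = attempts = 0
    for _ in range(trials):
        # Bands each side detects as accessible (may differ due to sensing).
        tx_free = [b for b in range(num_bands) if rng.random() < free_prob]
        rx_free = [b for b in range(num_bands) if rng.random() < free_prob]
        if not tx_free or not rx_free:
            continue
        attempts += 1
        if rng.choice(tx_free) == rng.choice(rx_free):
            hits += 1
    return hits / attempts

print(f"rendezvous probability ~ {rendezvous_prob():.3f}")
```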
This article develops a general technique for differential analysis that can be applied to singularities of three related problems: path tracking for nonredundant robots, self-motion analysis for robots with one degree of redundancy, and displacement analysis of single-loop mechanisms. For each of these problems, the locus of displacement solutions generally forms a set of one-dimensional manifolds in the space of variable parameters. However, if singularities occur, the manifolds may degenerate into isolated points, or into curves that include bifurcations at the singular points. Higher-order equations, derived from Taylor series expansion of the matrix equation of closure, are solved to identify singularity type and, in the case of bifurcations, to determine the number of intersecting branches as well as a Taylor series expansion of each branch about the point of bifurcation. To avoid unbounded mathematics, branch expansions are derived in terms of an introduced curve parameter. The results are useful for identifying singularity type, for numerical curve tracking with continuation past bifurcations on any chosen branch, and for determining exact rate relations for each branch at a bifurcation. The noniterative solution procedure involves configuration-dependent systems of equations that are evaluated by recursive algorithm, then solved using singular value decomposition, polynomial equation solution, and linear system solution. Examples show applications to RCRCR mechanisms and the Puma manipulator.
['Jon C. Kieffer']
Differential analysis of bifurcations and isolated singularities for robots and mechanisms
413,627
This paper deals with the use of Bayesian Belief Networks to improve the accuracy and training time of character segmentation for unconstrained handwritten text. Comparative experimental results have been evaluated against Naive Bayes classification, which is based on the assumption of parameter independence, and two additional commonly used methods. The results show that capturing the inferential dependencies of the training data can reduce the required training time and training-set size by 55%. Moreover, the achieved accuracy in detecting segment boundaries exceeds 86%, and even limited training data are shown to yield very satisfactory results.
['Manolis Maragoudakis', 'Ergina Kavallieratou', 'Nikos Fakotakis', 'George K. Kokkinakis']
How conditional independence assumption affects handwritten character segmentation
536,357
Many current performance analysis systems offer little more than basic measurement and analysis facilities for locating the sources of poor performance, such as load imbalance, communication overhead and synchronization loss. We believe that this is only part of the solution, and a system that can provide higher-level performance measurement and analysis is highly desirable. In this paper, we describe a new approach to designing performance tuning tools for parallel processing systems. A primary contribution of this work is to explore the way in which the strategies and algorithms used in parallel programs contribute to poor performance. In order to detect the strategies and algorithms used in parallel programs, a technique called Automatic Program Analysis is used. Our goal is to provide users with higher-level performance advice. We present a case study describing how a prototype implementation of our technique was able to identify a performance problem and provide tuning advice.
['Kei Chun Li', 'Kang Zhang']
Tuning parallel program through automatic program analysis
167,674
Performance improvement and energy efficiency are two important goals in provisioning Internet services in datacenter servers. In this article, we propose and develop a self-tuning request batching mechanism to simultaneously achieve the two correlated goals. The batching mechanism increases the cache hit rate at the front-tier Web server, which provides the opportunity to improve an application’s performance and the energy efficiency of the server system. The core of the batching mechanism is a novel and practical two-layer control system that adaptively adjusts the batching interval and frequency states of CPUs according to the service level agreement and the workload characteristics. The batching control adopts a self-tuning fuzzy model predictive control approach for application performance improvement. The power control dynamically adjusts the frequency of Central Processing Units (CPUs) with Dynamic Voltage and Frequency Scaling (DVFS) in response to workload fluctuations for energy efficiency. A coordinator between the two control loops achieves the desired performance and energy efficiency. We further extend the self-tuning batching with DVFS approach from a single-server system to a multiserver system. It relies on a MIMO expert fuzzy control to adjust the CPU frequencies of multiple servers and coordinate the frequency states of CPUs at different tiers. We implement the mechanism in a test bed. Experimental results demonstrate that the new approach significantly improves the application performance in terms of the system throughput and average response time. At the same time, the results also illustrate that the mechanism can reduce the energy consumption of a single-server system by 13% and a multiserver system by 11%, respectively.
['Dazhao Cheng', 'Yanfei Guo', 'Changjun Jiang', 'Xiaobo Zhou']
Self-Tuning Batching with DVFS for Performance Improvement and Energy Efficiency in Internet Servers
470,308
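A deliberately simplified sketch of the feedback structure behind the batching control above: lengthen the batching interval while the measured response time is under the SLA target, shorten it on violations. The paper's actual controller is a self-tuning fuzzy model-predictive one coordinated with DVFS; this proportional rule and its parameters are illustrative assumptions only.

```python
def adjust_batch_interval(interval_ms, measured_rt_ms, target_rt_ms,
                          gain=0.5, lo=1.0, hi=200.0):
    error = target_rt_ms - measured_rt_ms   # >0: headroom, <0: SLA violation
    interval_ms += gain * error             # proportional correction
    return max(lo, min(hi, interval_ms))    # clamp to a sane operating range

interval = 20.0
for rt in [80.0, 95.0, 120.0, 110.0]:       # measured response times (ms)
    interval = adjust_batch_interval(interval, rt, target_rt_ms=100.0)
    print(f"rt={rt:.0f}ms -> batch interval {interval:.1f}ms")
```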
Network on Chip (NoC) has become a promising solution for the communication paradigm of the next-generation multiprocessor system-on-chip (MPSoC). As communication has become an integral part of on-chip computing, researchers are paying more attention to its implementation and optimization. Traditional techniques that model inter-processor communication inaccurately lead to unexpected runtime performance, which in our observations is on average 90.8% worse than the predicted results. In this paper, we present an application mapping and scheduling technique for NoC-based MPSoCs that integrates fine-grain optimization of inter-processor communication with the objective of minimizing the schedule length. A communication model is proposed to properly capture the latency of inter-processor communication under network contention. Performance evaluation results show that solutions obtained by the proposed technique achieve realistic performance that is on average 34.7% higher than that of traditional techniques, and the Integer-Linear Programming (ILP) based approach can outperform the state-of-the-art heuristic algorithms by 31.1%. A case study on an H.264 HDTV decoder shows that our approach achieves 22.8% improvement in prediction accuracy, 20.9% improvement in performance and 40% reduction in the number of network contentions.
['Lei Yang', 'Weichen Liu', 'Weiwen Jiang', 'Wei Zhang', 'Mengquan Li', 'Juan Yi', 'Duo Liu', 'Edwin Hsing Mean Sha']
Traffic-Aware Application Mapping for Network-on-Chip Based Multiprocessor System-on-Chip
553,387
Performance Prediction Toolkit (PPT) is a simulator mainly developed at Los Alamos National Laboratory to facilitate rapid and accurate performance prediction of large-scale scientific applications on existing and future HPC architectures. In this paper, we present three interconnect models for performance prediction of large-scale HPC applications. They are based on interconnect topologies widely used in HPC systems: torus, dragonfly, and fat-tree. We conduct extensive validation tests of our interconnect models, in particular, using configurations of existing HPC systems. Results show that our models provide good accuracy for predicting the network behavior. We also present a performance study of a parallel computational physics application to show that our model can accurately predict the parallel behavior of large-scale applications.
['Kishwar Ahmed', 'Jason Liu', 'Stephan Eidenbenz', 'Joe Zerr']
Scalable Interconnection Network Models for Rapid Performance Prediction of HPC Applications
997,367
This paper examines receiving antenna selection (RAS) and receiving antenna combination (RAC) techniques from the viewpoint of receiver structure with the aim of improving bit error rate (BER) performance for receivers in MIMO (multi-input multi-output) systems. We assume two receiver structures. One is a structure for controlling gain in received signals centrally for all receiving antennas (CC receiver). The other is a structure for controlling gain in received signals individually in each receiving antenna (CI receiver). We show that a CC receiver can obtain good BER performance when utilizing RAS/RAC techniques using the channel matrix eigenvalue (RAS/RAC-E). Additionally, we consider RAS/RAC techniques using the phase of the channel component (RAS/RAC-PC) and received power (RAS/RAC-RP) in a CI receiver. We then simulate BER performance when employing the proposed RAS-PC and RAS-RP techniques under Rayleigh fading channels. The results clearly show that the RAS/RAC-PC techniques are able to obtain good BER performance for a CI receiver.
['Yutaka Murakami', 'Kiyotaka Kobayashi', 'Masayuki Orihashi', 'Takashi Matsuoka']
Investigation of diversity techniques considering receiver structure in MIMO systems
525,706
This paper focuses on decentralized personalized search engines and is composed of three parts. First, we formulate the problem and propose a graph-based measure of the quality of a document given a user and a word. This quality measure gives a formal way to test the personalized relevance of answers provided by search engines. Second, we present a decentralized system, MAAY, which aims to provide personalized results to its users. Users can share, search and retrieve documents from others. Nodes locally learn others' profiles from previous interactions and use feedback to promote the words characterizing a document. By preferentially querying nodes with similar profiles and by ranking documents according to the requester's profile, nodes may find documents that are relevant for them. Finally, we propose a framework for experimental studies of such systems and use it to show that the main principles of MAAY can benefit users.
['Frederic Dang Ngoc', 'Joaquín Keller', 'Gwendal Simon']
MAAY: a decentralized personalized search system
368,682
We investigate coin-flipping protocols for multiple parties in a quantum broadcast setting: (1) we propose and motivate a definition for quantum broadcast. Our model of quantum broadcast channel is new. (2) We discovered that quantum broadcast is essentially a combination of pairwise quantum channels and a classical broadcast channel. This is a somewhat surprising conclusion, but helps us in both our lower and upper bounds. (3) We provide tight upper and lower bounds on the optimal bias $\epsilon$ of a coin which can be flipped by $k$ parties of which exactly $g$ parties are honest: for any $1 \le g \le k$, $\epsilon = 1/2 - \Theta(g/k)$. Thus, as long as a constant fraction of the players are honest, they can prevent the coin from being fixed with at least a constant probability. This result stands in sharp contrast with the classical setting, where no non-trivial coin-flipping is possible when $g \le k/2$.
['Andris Ambainis', 'Harry Buhrman', 'Yevgeniy Dodis', 'Hein Röhrig']
Multiparty quantum coin flipping
147,492
Acoustic focusing by microphone array for human interactive mobile robot.
['Hiroshi Mizoguchi', 'Yoshiyuki Kato', 'Kazuyuki Hiraoka', 'Masaru Tanaka', 'Takaomi Shigehara', 'Taketoshi Mishima']
Acoustic focusing by microphone array for human interactive mobile robot.
992,528
This paper proposes a new semidefinite programming relaxation for the satisfiability problem. This relaxation is an extension of previous relaxations arising from the paradigm of partial semidefinite liftings for 0/1 optimization problems. The construction of the relaxation depends on a choice of permutations of the clauses, and different choices may lead to different relaxations. We then consider the Tseitin instances, a class of instances known to be hard for certain proof systems, and prove that for any choice of permutations, the proposed relaxation is exact for these instances, meaning that a Tseitin instance is unsatisfiable if and only if the corresponding semidefinite programming relaxation is infeasible.
['Miguel F. Anjos']
An Extended Semidefinite Relaxation for Satisfiability
455,857
In this study, we investigate the effect of the attributes of humans, such as sex and age, on their psychological evaluation of humanoids. We used 11 humanoids in order to investigate the basic tendency of humans to evaluate humanoids. In addition, we included wheeled-walking robots, biped-walking robots, and androids in order to consider the influence of the type of humanoid. We collected data from 2,624 Japanese individuals, ranging from teenagers to people in their 70s, in three major cities in order to obtain maximally representative data. For our psychological scale, we used a humanoid-oriented scale that was developed on the basis of parameters for the evaluation of humanoids according to the perspectives of ordinary people. These parameters are familiarity, utility, and humanness. The results show that middle-aged and older females tend to rate the familiarity and humanness of all humanoids higher, adolescents tend to rate the familiarity and utility of wheeled-walking humanoids higher and the utility of androids lower, and middle-aged people tend to rate the utility of all humanoids higher. We discuss the improved design of humanoids considering both human characteristics and types of humanoids.
['Hiroko Kamide', 'Yasushi Mae', 'Koji Kawabe', 'Satoshi Shigemi', 'Tatsuo Arai']
Effect of human attributes and type of robots on psychological evaluation of humanoids
307,001
Cloud Computing is becoming interesting for enterprises across all branches. Renting computing capabilities from external providers avoids initial investments, as only those resources that are eventually used have to be paid for. Especially in the context of “Big Data”, this pay-as-you-go accounting model is particularly important. The dynamically scalable resources from the Cloud enable enterprises to store and analyze these huge amounts of unstructured data without using their own hardware infrastructure. However, Cloud Computing currently faces severe data security and protection issues. These challenges require new ways to store and analyze data, especially when huge volumes of sensitive data are stored at external locations. The presented approach separates data at the database table level into independent chunks and distributes them across several clouds. Hence, this work is a contribution to a more secure and resilient cloud architecture, as multiple public and private cloud providers can be used independently to store data without violating data security and privacy constraints.
['Jens Kohler', 'Kiril Simov', 'Thomas Specht']
Analysis of the Join Performance in Vertically Distributed Cloud Databases
902,991
Mobile software objects are computational entities that travel in large-scale and widely-distributed heterogeneous systems, and whose functionality can be attached to diverse computing environments. When employed over decentralized sites with operational and administrative autonomy, support for mobility raises difficult issues with respect to object management services. In particular it impacts persistence, reference handling, object naming, and requires extensive support for security. This paper discusses the requirements from an object management system that incorporates mobile, autonomous and reflective objects and presents the design and implementation of the Mobile Object Manager (MOM) which fulfills these requirements.
['Boris Lavva', 'Ophir Holder', 'Israel Ben-Shaul']
Object management for network-centric systems with mobile objects
163,576
Modern smart phones are now capable of gathering information about a user's social interactions. The authors have developed and deployed Nodobo, a suite of social sensor software for Android. Our first study group is a class of senior high school students, each using a Google Nexus One mobile phone running Nodobo, which we use to capture their device usage patterns and social interactions. We provide an overview of the system architecture, describe the trial, and share some initial results.
['Stephen David Bell', 'Alisdair McDiarmid', 'James Irvine']
Nodobo: Mobile Phone as a Software Sensor for Social Network Research
298,423
We propose a per-tone frequency-domain equalization approach for OFDM over doubly-selective channels. We consider the most general case, where the doubly-selective channel delay spread is larger than the cyclic prefix (CP), which results in inter-block interference (IBI). IBI, in conjunction with the Doppler effect, destroys the orthogonality between subcarriers and hence results in severe intercarrier interference (ICI). In this paper, we propose a novel per-tone frequency-domain equalizer (PTFEQ) that is obtained by transferring a time-varying time-domain equalizer (TV-TEQ) to the frequency domain. The purpose of the TV-TEQ is to restore orthogonality between subcarriers and eliminate ICI. We use the mean-square error criterion to design the PTFEQ. An efficient implementation of the proposed PTFEQ is also discussed. Finally, we show some simulation results of the proposed equalization technique.
['Imad Barhumi', 'Geert Leus', 'Marc Moonen']
Per-tone equalization for OFDM over doubly-selective channels
481,399
In distributed cloud storage, fault tolerance is maintained by regenerating lost coded data from the surviving clouds. Recent studies suggest using maximum distance separable (MDS) network codes in cloud storage systems to allow efficient and reliable recovery after node faults. MDS codes are designed to use a substantial number of repair nodes and rely on centralized management and a static fully connected network between the nodes. However, in highly dynamic environments, like edge caching in communication networks or peer-to-peer networks, the availability of nodes and communication links is very volatile. In these scenarios the functionality of MDS codes is limited. In this paper we study a non-MDS network coded approach, which operates in a decentralized manner and requires a small number of repair nodes for node recovery. We investigate the long-term behavior and durability of the modeled system in terms of the storage lifetime, i.e., the number of cycles of node failure and recovery after which the storage no longer has enough data to decode the original source packets. We demonstrate, analytically and numerically, the lifetime gains over uncoded storage.
['Vitaly Abdrashitov', 'Muriel Medard']
Durable network coded distributed storage
709,156