Columns: abstract (string, 7–10.1k chars) · authors (string, 9–1.96k chars) · title (string, 6–367 chars) · __index_level_0__ (int64, 5–1,000k)
This article reports on a project, involving three New Zealand schools, which investigated teachers' understanding of information literacy and their associated classroom practices. Recently published work, while lamenting school students' lack of information literacy skills, including skills for working with online resources, provides little research investigating classroom teachers' knowledge of information literacy and their related pedagogical practice. The findings indicate that while some of the participating teachers had a reasonably good understanding of the concept of information literacy, very few reported developing their students' information literacy skills.
['Elizabeth Probert']
Information literacy skills: Teacher understandings and practice
441,923
Randomness properties of stream ciphers for wireless communications.
['Benny Y. Zhang', 'Guang Gong']
Randomness properties of stream ciphers for wireless communications.
914,998
Inducing Political Attitude Change and Shifted Voting Intentions During a General Election.
['Thomas Strandberg', 'Lars Hall', 'Petter Johansson']
Inducing Political Attitude Change and Shifted Voting Intentions During a General Election.
788,836
Matrix and tensor factorization have been applied to a number of semantic relatedness tasks, including paraphrase identification. The key idea is that similarity in the latent space implies semantic relatedness. We describe three ways in which labeled data can improve the accuracy of these approaches on paraphrase classification. First, we design a new discriminative term-weighting metric called TF-KLD, which outperforms TF-IDF. Next, we show that using the latent representation from matrix factorization as features in a classification algorithm substantially improves accuracy. Finally, we combine latent features with fine-grained n-gram overlap features, yielding performance that is 3% more accurate than the prior state-of-the-art.
['Yangfeng Ji', 'Jacob Eisenstein']
Discriminative Improvements to Distributional Sentence Similarity
612,824
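As a rough illustration of the TF-KLD idea described in the abstract above, here is a minimal sketch that scores each term by the KL divergence between its occurrence pattern in paraphrase pairs and in non-paraphrase pairs. The exact conditional distributions used by Ji and Eisenstein may differ; the binary-feature simplification, the smoothing constant, and the input format are assumptions of this sketch.

```python
import numpy as np

def tf_kld_weights(pairs, labels, vocab_size, eps=0.05):
    """TF-KLD-style weight per term: KL divergence between the term's
    occurrence pattern in paraphrase pairs (label 1) and non-paraphrase
    pairs (label 0). `pairs` is a list of (set_of_term_ids, set_of_term_ids).
    A simplified reading of the paper's metric, with add-eps smoothing."""
    # counts[c][k] = [both sentences contain term k, exactly one does]
    counts = np.full((2, vocab_size, 2), eps)
    for (s1, s2), y in zip(pairs, labels):
        for k in s1 | s2:
            counts[y, k, 0 if (k in s1 and k in s2) else 1] += 1
    p = counts[1] / counts[1].sum(axis=1, keepdims=True)  # paraphrase class
    q = counts[0] / counts[0].sum(axis=1, keepdims=True)  # non-paraphrase class
    return (p * np.log(p / q)).sum(axis=1)  # KL(p || q) for each term
```

The resulting weights would replace IDF when building the term-pair matrix that is then factorized.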
We present in this paper an analysis of the retry limit for supporting VoIP in IEEE 802.11e EDCA WLANs. The proposed research considers packet delay as an important QoS performance measure for providing time-bounded services, such as VoIP and video, in WLANs that usually implement default parameters. We also present an analytical method based on a Markov chain model that allows us to derive theoretical bounds for retry limit analysis. Based on the Markov chain model that considers the retry limit, we can analyze the performance of VoIP over the EDCA mechanism. In addition, in order to resolve the transmission bottleneck of VoIP in the downlink of the AP, we maintain that the best service can be achieved when the throughputs of the uplink and downlink are the same. Such a cross performance point is studied with respect to different retry limit parameters. Finally, extensive simulations have been carried out to validate the analytical model as well as to find the corresponding cross point. We demonstrate that enhanced VoIP capacity can be achieved by selecting appropriate retry limit and contention window size parameters.
['Byung Joon Oh', 'Chang Wen Chen']
Analysis of Retry Limit for Supporting VoIP in IEEE 802.11e EDCA WLANs
429,556
Dynamic programming algorithms have been successfully applied to propositional stochastic planning problems by using compact representations, in particular algebraic decision diagrams, to capture domain dynamics and value functions. Work on symbolic dynamic programming lifted these ideas to first order logic using several representation schemes. Recent work introduced a first order variant of decision diagrams (FODD) and developed a value iteration algorithm for this representation. This paper develops several improvements to the FODD algorithm that make the approach practical. These include, new reduction operators that decrease the size of the representation, several speedup techniques, and techniques for value approximation. Incorporating these, the paper presents a planning system, FODD-PLANNER, for solving relational stochastic planning problems. The system is evaluated on several domains, including problems from the recent international planning competition, and shows competitive performance with top ranking systems. This is the first demonstration of feasibility of this approach and it shows that abstraction through compact representation is a promising approach to stochastic planning.
['Saket Joshi', 'Roni Khardon']
Probabilistic relational planning with first order decision diagrams
71,691
Underwater wireless sensor networks (UWSNs) have been developed for a range of underwater applications, including resource exploration, pollution monitoring, and tactical surveillance. However, the complexity and diversity of the underwater environment differentiate it significantly from the terrestrial environment. In particular, the coverage requirements (i.e., coverage degrees and coverage probabilities) at different regions often differ underwater. Nevertheless, little effort has been made so far on the topology control of UWSNs given such diverse coverage requirements. To this end, this article proposes two algorithms for the diverse coverage problem in UWSNs: (1) the Traversal Algorithm for Diverse Coverage (TADC), which adjusts the sensing radii of nodes successively, that is, at each round only one node alters its sensing radius, and (2) the Radius Increment Algorithm for Diverse Coverage (RIADC), which sets the sensing radii of nodes incrementally, that is, at each round multiple nodes may increase their sensing radii simultaneously. The performance of TADC and RIADC is studied through mathematical analysis and simulations. The results reveal that both TADC and RIADC can achieve diverse coverage while minimizing energy consumption. Moreover, TADC and RIADC perform well in obtaining optimal sensing radii and in reducing message complexity, respectively. These merits further indicate that TADC and RIADC are suitable for small-scale and large-scale UWSNs, respectively.
['Linfeng Liu', 'Jingli Du', 'Ye Liu']
Topology Control for Diverse Coverage in Underwater Wireless Sensor Networks
892,093
It is acknowledged that some obesity trajectories are set early in life, and that rapid weight gain in infancy is a risk factor for later development of obesity. Identifying modifiable factors associated with early rapid weight gain is a prerequisite for curtailing the growing worldwide obesity epidemic. Recently, much attention has been given to findings indicating that gut microbiota may play a role in obesity development. We aim to identify how the development of early gut microbiota is associated with expected infant growth. We developed a novel procedure that allows for the identification of longitudinal gut microbiota patterns (corresponding to the developing gut ecosystem), which are associated with an outcome of interest, while appropriately controlling the false discovery rate. Our method identified developmental pathways of Staphylococcus species and Escherichia coli that were associated with expected growth, and traditional methods indicated that the detection of Bacteroides species at day 30 was associated with growth. Our method should have wide future applicability for studying gut microbiota, and is particularly important for translational considerations, as it is critical to understand the timing of microbiome transitions prior to attempting to manipulate gut microbiota in early life.
['Richard A. White', 'Jørgen V. Bjørnholt', 'Donna D. Baird', 'Tore Midtvedt', 'Jennifer R. Harris', 'Marcello Pagano', 'Winston Hide', 'Knut Rudi', 'Birgitte Moen', 'Nina Iszatt', 'Shyamal D. Peddada', 'Merete Eggesbø']
Novel Developmental Analyses Identify Longitudinal Patterns of Early Gut Microbiota that Affect Infant Growth
133,923
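The abstract above emphasizes controlling the false discovery rate while screening many candidate longitudinal microbiota patterns. The authors' procedure is novel and not reproduced here; for orientation, this is a sketch of the standard Benjamini-Hochberg step-up procedure, the baseline form of FDR control.

```python
import numpy as np

def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of
    hypotheses rejected at FDR level alpha."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    # compare the i-th smallest p-value against alpha * i / m
    below = p[order] <= alpha * (np.arange(1, m + 1) / m)
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True  # reject the k smallest p-values
    return reject
```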
An Outline of Development Process Framework for Software based on Open-source Components
['Jakub Swacha', 'Karolina Muszyńska', 'Zygmunt Drążek']
An Outline of Development Process Framework for Software based on Open-source Components
743,013
Flow* 1.2: More Effective to Play with Hybrid Systems.
['Xin Chen', 'Sriram Sankaranarayanan', 'Erika Ábrahám']
Flow* 1.2: More Effective to Play with Hybrid Systems.
980,232
In this study, learning activities were designed with a focus on students' English writing and speaking performance. The Virtual Pen (VPen), a multimedia web annotation system, was provided to help students participate in the learning activities by creating annotations, sharing them, and giving feedback on peers' work. An experiment was conducted with VPen in an English class over one semester, with the following results. Students perceived VPen as easy to use and useful during the learning activities, and they had a positive attitude toward using it. Students also found the designed activities enjoyable and useful for improving their writing and speaking performance. Furthermore, students' actual VPen usage correlated significantly with speaking and writing performance. Further investigation demonstrated that students' speaking and writing performance correlated significantly with learning achievement. Based on these findings, we conclude that the learning activities designed in this study, supported by the VPen system, facilitate students' writing and speaking skills and thereby improve their learning achievement.
['Wu-Yuin Hwang', 'Rustam Shadiev', 'Szu-Min Huang']
Effect of multimedia annotation system on improving English writing and speaking performance
441,444
The CONNECT European project, which started in February 2009, aims to drop the interoperability barrier faced by today's distributed systems. It does so by adopting a revolutionary approach to the seamless networking of digital systems, that is, synthesizing on the fly the connectors via which networked systems communicate. CONNECT then investigates formal foundations for connectors together with associated automated support for learning, reasoning about, and adapting the interaction behavior of networked systems.
['Valérie Issarny', 'Bernhard Steffen', 'Bengt Jonsson', 'Gordon S. Blair', 'Paul Grace', 'Marta Z. Kwiatkowska', 'Radu Calinescu', 'Paola Inverardi', 'Massimo Tivoli', 'Antonia Bertolino', 'Antonino Sabetta']
CONNECT Challenges: Towards Emergent Connectors for Eternal Networked Systems
339,723
The role of adaptivity in two-level adaptive branch prediction
['Stuart Sechrest', 'Chih-Chieh Lee', 'Trevor N. Mudge']
The role of adaptivity in two-level adaptive branch prediction
926,440
Summary: We present a database of fully sequenced and published genomes to facilitate the re-distribution of data and ensure reproducibility of results in the field of computational genomics. For its design we have implemented an extremely simple yet powerful schema to allow linking of genome sequence data to other resources. Availability: http://maine.ebi.ac.uk:8000/services/cogent/
['Paul Janssen', 'Anton J. Enright', 'Benjamin Audit', 'Ildefonso Cases', 'Leon Goldovsky', 'Nicola Harte', 'Victor Kunin', 'Christos A. Ouzounis']
COmplete GENome Tracking (COGENT): a flexible data environment for computational genomics.
64,081
To be truly useful, robots should be able to handle a variety of tasks in diverse environments without the need for re-programming. Current systems, however, are typically task-specific. Aiming toward autonomous robots capable of acquiring new behaviors and task capabilities, we describe a method by which a human teacher can instruct a robot student how to accomplish new tasks. During the course of training, the robot learns both the sequence of behaviors it should execute and, if needed, the behaviors themselves. It also stores each learning episode as a case for later generalization and reuse.
['Nathan P. Koenig']
Demonstration-Based Behavior and Task Learning
44,363
Design patterns are a proven way to build high-quality software. The number of design patterns is rising rapidly, while management and search facilities seem not to keep up. This is why selecting a suitable design pattern is not always an easy task, especially for less experienced developers. In this paper we present our approach to coping with this issue: an experimental prototype of a new design pattern repository based on semantic web technologies. Since the new ontology-based design pattern repository is a work in progress, we point out its potential for improving design pattern adoption.
['Luka Pavlic', 'Marjan Hericko', 'Vili Podgorelec']
Improving design pattern adoption with Ontology-Based Design Pattern Repository
387,197
Due to its higher capacity factor and proximity to densely populated areas, offshore wind power with integrated energy storage could satisfy > 20% of U.S. electricity demand. Similar results could also be obtained in many parts of the world. The offshore environment can be used for unobtrusive, safe, and economical utility-scale energy storage by taking advantage of the hydrostatic pressure at ocean depths to store energy by pumping water out of concrete spheres and later allowing it to flow back in through a turbine to generate electricity. The storage spheres are an ideal complement to energy harvesting machines, such as floating wind turbines (FWTs). The system could provide near-base-load-quality utility-scale renewable energy and do double duty as the anchoring point for the generation platforms. Analysis indicates that storage can be economically feasible at depths as shallow as 200 m, with cost per megawatt hour of storage dropping until 1500 m before beginning to trend upward. The sweet spot occurs when the concrete wall thickness needed to withstand the hydrostatic pressure provides enough ballast mass, and this will depend on the strength of the concrete used and its reinforcement. In addition, the required concrete would use significant amounts of fly ash from coal-fired power plants, and the spheres can serve as artificial reefs.
['Alexander H. Slocum', 'Gregory E. Fennell', 'Gökhan Dündar', 'Brian G. Hodder', 'James D. C. Meredith', 'Monique Sager']
Ocean Renewable Energy Storage (ORES) System: Analysis of an Undersea Energy Storage Concept
257,210
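A back-of-the-envelope check of the undersea storage concept above, using only hydrostatic pressure p = ρgh and E ≈ pV. The sphere size, depth, and round-trip efficiency below are illustrative assumptions, not figures from the paper.

```python
import math

RHO_SEAWATER = 1025.0  # kg/m^3
G = 9.81               # m/s^2

def stored_energy_mwh(depth_m, inner_radius_m, efficiency=0.8):
    """Rough energy recoverable by letting seawater flow back into an
    evacuated sphere through a turbine: E ~ p * V, treating the pressure as
    constant over the sphere (valid when radius << depth)."""
    p = RHO_SEAWATER * G * depth_m                   # hydrostatic pressure, Pa
    v = (4.0 / 3.0) * math.pi * inner_radius_m ** 3  # inner volume, m^3
    return efficiency * p * v / 3.6e9                # J -> MWh

# Illustrative numbers: a 25 m inner-diameter sphere at 1500 m depth.
print(f"{stored_energy_mwh(1500, 12.5):.1f} MWh")
```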
Increasingly, national and international governments have a strong mandate to develop national e-health systems to enable delivery of much-needed healthcare services. Research is, therefore, needed into appropriate security and reliance structures for the development of health information systems that must be compliant with governmental and similar obligations. The protection of e-health information security is critical to the successful implementation of any e-health initiative. To address this, this paper proposes a security architecture for index-based e-health environments, according to the broad outline of Australia's National E-health Strategy and the National E-health Transition Authority (NEHTA)'s Connectivity Architecture. This proposal, however, could be equally applied to any distributed, index-based health information system involving referencing to disparate health information systems. The practicality of the proposed security architecture is supported through an experimental demonstration. This successful prototype demonstrates the comprehensibility of the proposed architecture, and the clarity and feasibility of the system specifications, in enabling ready development of such a system. The test vehicle has also indicated a number of parameters that need to be considered in any national index-based e-health system design with reasonable levels of system security. Finally, this paper identifies the need to evaluate the levels of education, training, and expertise required to create such a system.
['Vicky Liu', 'William J. Caelli', 'Yingsen Yang', 'Lauren May']
A test vehicle for compliance with resilience requirements in index-based e-health systems
687,951
Total generalised variation (TGV) methods are highly effective at eliminating staircase artefacts. With the aim of further avoiding over-smoothed edges, this study investigates a new weighted second-order TGV scheme for image restoration. Computationally, an alternating split Bregman algorithm is employed to obtain the optimal solution recursively. Moreover, a rigorous convergence analysis of the resulting algorithm is briefly described. In comparison with current state-of-the-art regularisers, numerical simulations clearly demonstrate the competitive performance of the proposed strategy in feature preservation and staircase-effect suppression.
['Xinwu Liu']
Weighted total generalised variation scheme for image restoration
643,887
In this paper, we discuss the general area of software development for reuse and reuse guidelines. We identify, in detail, language-oriented and domain-oriented guidelines whose effective use affects component reusability. This paper also proposes tool support that can provide advice and generate reusable components automatically, based on domain knowledge (reuse guidelines represented as domain knowledge).
['Muthu Ramachandran']
Software reuse guidelines
533,979
Several premium automotive brands offer night vision systems to enhance the driver's ability to see at night. Most recent generation night vision systems have added pedestrian detection as a feature to assist drivers to avoid potential collisions. This paper reviews pedestrian detection based on two different sensing technologies: active night vision operating in the near-infrared (NIR) region of the electromagnetic spectrum, and passive night vision operating in the far-infrared (FIR) spectrum. It also discusses the pros and cons of each type of night-vision system with respect to pedestrian detection capability, effectiveness for collision avoidance, and commercial attractiveness. The paper introduces an enhancement to the NIR active lighting scheme that significantly improves pedestrian detection performance. With improved pedestrian detection performance, we argue that the NIR night vision system is more effective at improving night-time driving safety and may achieve broader market acceptance.
['Yun Luo', 'Jeffrey Thomas Remillard', 'Dieter Hoetzer']
Pedestrian detection in near-infrared night vision system
360,208
We present a system for interactively querying a topical resource repository and for navigating the set of responses obtained from the repository. Our system makes use of a domain ontology for dynamically constructing a Galois lattice. Nodes in this structure are resource clusters that are interactively examined by the user during the construction process.
['Brigitte Safar', 'Hassen Kefi']
Domain ontology and Galois lattice structure for query refinement
346,911
Understanding bike trip patterns in a bike sharing system is important for researchers designing models for station placement and bike scheduling. By bike trip patterns, we refer to the large number of bike trips observed between two stations. However, due to privacy and operational concerns, bike trip data are usually not made publicly available. In this paper, instead of relying on time-consuming surveys and inaccurate simulations, we attempt to infer bike trip patterns directly from station status data, which are usually public to help riders find nearby stations and bikes. However, the station status data do not contain information about where the bikes come from and go to; therefore the same observations on stations might correspond to different underlying bike trips. To address this challenge, we conduct an empirical study on a sample bike trip dataset to gain insights into the inner structure of bike trips. We then formulate the trip inference problem as an ill-posed inverse problem, and propose a regularization technique to incorporate the a priori information about bike trips to solve the problem. We evaluate our method using real-world bike sharing datasets from Washington, D.C. Results show that our method effectively infers bike trip patterns.
['Longbiao Chen', 'Jérémie Jakubowicz']
Inferring bike trip patterns from bike sharing system open data
587,952
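The abstract above casts trip inference as an ill-posed inverse problem solved with regularization that injects a-priori trip information. A minimal sketch of that general recipe, assuming a linear observation model A mapping trip counts to observed station status changes; the paper's actual model and regularizer may differ.

```python
import numpy as np

def infer_trips(A, y, prior, lam=1.0):
    """Tikhonov-style regularised least squares for an ill-posed inverse
    problem: minimise ||A x - y||^2 + lam * ||x - prior||^2, where x is the
    unknown vector of station-to-station trip counts, y the observed station
    status changes, and `prior` the a-priori trip information.
    Closed form: x = (A^T A + lam I)^{-1} (A^T y + lam * prior)."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y + lam * prior)
```

Without the regularization term the normal equations are singular whenever A has a nontrivial null space, which is exactly the "same observations, different trips" ambiguity the abstract describes.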
Although the classically adopted PID-cascade postural control schemes are adequate for describing the effects of external disturbances acting on upright stance, it is shown in this article that they fall short of addressing the issue of voluntary motion due to the inherent instability of uncontrolled upright posture. A novel alternative, the "hybrid cascade-feedback scheme," is presented and shown to be equivalent to the PID-cascade scheme in terms of external disturbances while overcoming its shortcomings related to voluntary motion. The proposed scheme is based on a well-established robust tracking and disturbance rejection control method. It can be modularly extended to cover multi-input multi-output scenarios by employing state-space tools.
['Karim A. Tahboub']
A control-theoretic approach for human postural control modeling
895,465
In this paper, a novel view invariant person identification method based on human activity information is proposed. Unlike most methods proposed in the literature, in which “walk” (i.e., gait) is assumed to be the only activity exploited for person identification, we incorporate several activities in order to identify a person. A multicamera setup is used to capture the human body from different viewing angles. Fuzzy vector quantization and linear discriminant analysis are exploited in order to provide a discriminant activity representation. Person identification, activity recognition, and viewing angle specification results are obtained for all the available cameras independently. By properly combining these results, a view-invariant activity-independent person identification method is obtained. The proposed approach has been tested in challenging problem setups, simulating real application situations. Experimental results are very promising.
['Alexandros Iosifidis', 'Anastasios Tefas', 'Ioannis Pitas']
Activity-Based Person Identification Using Fuzzy Representation and Discriminant Learning
56,592
In this study, we focus on a technique that embeds an electronically readable watermark directly on printed materials, based on the single-dot pattern method proposed by Kaneda et al. We extend the existing method to A4 media and improve the extraction rate to over 99%. In this paper, we verify that essentially the same information extraction results are obtained when the type of printer is changed. We also apply the technique to paper documents with foreground content and, by using error-correcting codes, successfully embed and extract information without error.
['Kitahiro Kaneda', 'Hiroyasu Kitazawa', 'Keiichi Iwamura', 'Isao Echizen']
A Study of Equipment Dependence of a Single-Dot Pattern Method for an Information-Hiding by Applying an Error-Correcting Code
705,534
The quality of teaching performance is a primary index for the assessment of higher education in China and in other countries. As a new type of higher education institution in China, independent colleges have been confronting many practical difficulties and obstacles, which greatly affect their teaching quality. By analyzing the teaching-evaluation process of our college in recent years, the author points out the existing deficiencies in the performance assessment of independent college teachers and suggests countermeasures to these problems.
['Ai-Qun Yu']
Analysis of performance assessment for teachers of independent colleges in China
601,478
A novel Takagi-Sugeno-Kang (TSK) type fuzzy neural network which uses general type-2 fuzzy sets in a type-2 fuzzy logic system, called general type-2 fuzzy neural network (GT2FNN), is proposed for function approximation. The problems of constructing a GT2FNN include type reduction, structure identification, and parameter identification. An efficient strategy is proposed by using α-cuts to decompose a general type-2 fuzzy set into several interval type-2 fuzzy sets to solve the type reduction problem. Incremental similarity-based fuzzy clustering and linear least squares regression are combined to solve the structure identification problem. Regarding the parameter identification, a hybrid learning algorithm (HLA) which combines particle swarm optimization (PSO) and a recursive least squares (RLS) estimator is proposed for refining the antecedent and consequent parameters, respectively, of the fuzzy rules. Simulation results show that the resulting networks are robust against outliers.
['Wen-Hau Roger Jeng', 'Chi-Yuan Yeh', 'Shie-Jue Lee']
General type-2 fuzzy neural network with hybrid learning for function approximation
268,172
All-interval series is a standard benchmark problem for constraint satisfaction search. An all-interval series of size n is a permutation of the integers [0, n) such that the differences between adjacent integers are a permutation of [1, n). Generating every all-interval series of size n is an interesting challenge for the constraint community, and the problem is very difficult in terms of the size of the search space. Different approaches have been used to date to generate all solutions of AIS, but the search space that must be explored remains huge. In this paper, we present a constraint-directed backtracking-based tree search algorithm that performs efficient lazy checking rather than immediate constraint propagation. Moreover, we prove several key properties of all-interval series that help prune the search space significantly. The reduced search space essentially results in less backtracking. We also present scalable parallel versions of our algorithm that can exploit multi-core processors and even multiple computer systems. Our new algorithm generates all solutions for sizes up to 27, while a satisfiability-based state-of-the-art approach generates all solutions only up to size 24.
['Masbaul Alam Polash', 'M. A. Hakim Newton', 'Abdul Sattar']
Constraint-directed search for all-interval series
957,170
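For concreteness, here is a plain backtracking enumerator for all-interval series, without the constraint-directed lazy checking, pruning properties, or parallelism that the paper contributes.

```python
def all_interval_series(n):
    """Yield every permutation of 0..n-1 whose absolute differences between
    adjacent elements are a permutation of 1..n-1 (naive backtracking)."""
    series, used_vals, used_diffs = [], set(), set()

    def extend():
        if len(series) == n:
            yield tuple(series)
            return
        for v in range(n):
            if v in used_vals:
                continue
            if series:
                d = abs(v - series[-1])
                if d in used_diffs:  # this interval already used
                    continue
                used_diffs.add(d)
            series.append(v)
            used_vals.add(v)
            yield from extend()
            used_vals.discard(series.pop())
            if series:  # undo the interval added for v
                used_diffs.discard(abs(v - series[-1]))

    yield from extend()

print(sum(1 for _ in all_interval_series(8)))  # number of solutions for n=8
```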
We describe various methods designed to discover knowledge in the GenBank nucleic acid sequence database. Using a grammatical model of gene structure, we create a parse tree of a gene using features listed in the FEATURE TABLE. The parse tree infers features that are not explicitly listed, but which follow from the listed features. This method discovers 30% more introns and 40% more exons when applied to a globin gene subset of GenBank. Parse tree construction also entails resolving ambiguity and inconsistency within a FEATURE TABLE. We transform the parse tree into an augmented FEATURE TABLE that represents inferred gene structure explicitly and unambiguously, thereby greatly improving the utility of the FEATURE TABLE to researchers. We then describe various analogical reasoning techniques designed to exploit the homologous nature of genes. We build a classification hierarchy that reflects the evolutionary relationship between genes. Descriptive grammars of gene classes are then induced from the instance grammars of genes. Case based reasoning techniques use these abstract gene class descriptions to predict the presence and location of regulatory features not listed in the FEATURE TABLE. A cross-validation test shows a success rate of 87% on a globin gene subset of GenBank.
['Jeffery S. Aaronson', 'Juergen Haas', 'G. Christian Overton']
Knowledge Discovery in GENBANK
169,200
Military operations rely heavily on tactical data links (TDLs) to exchange command and control data, allowing participants to gain an understanding of the operational situation. Since most operations today involve multiple nations cooperating on a mission, these tactical data links are standardised, e.g., as NATO STANAGs. However, these standards are very complex and regularly updated to reflect technological advances, so implementing TDL systems adhering to them is a complex and error-prone process. By supporting a NATO group developing an XML implementation of TDL standards and providing a toolset to implement a multi-TDL-capable system configured using these XML TDL specifications, the authors demonstrate how to shorten this process and allow for a completely automated implementation of TDL STANAGs.
['Tobias Eggendorfer', 'Volker Eiseler']
Towards automatic implementation of TDL systems
961,583
We present a non-destructive watermelon classification method using Mel-Frequency Cepstrum Coefficients (MFCC) and a Multi-Layer Perceptron (MLP) neural network. Acoustic signals were collected from the thumping noises of ripe and unripe watermelons and converted into MFCC coefficients. The coefficients were then used to train an MLP, which gives the final decision on the watermelon's ripeness. We describe the methods used to obtain the acoustic samples, as well as the evaluation of several MLP structures and parameters to obtain the best MLP classifier. Our results show that the proposed method was able to discriminate between ripe and unripe watermelons with 77.25% accuracy.
['Shah Rizam Mohd Shah Baki', 'Ihsan Mohd Yassin', 'A. Hassan Hasliza', 'A. Zabidi']
Non-destructive classification of watermelon ripeness using Mel-Frequency Cepstrum Coefficients and Multilayer Perceptrons
421,091
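A hedged sketch of the pipeline described above (thump recordings → MFCC features → MLP classifier) using librosa and scikit-learn. The feature summarization (mean MFCC vector), the network size, and the caller-supplied recordings list are assumptions of this sketch, not the authors' exact setup.

```python
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def mfcc_features(wav_path, n_mfcc=13):
    """Summarise one thump recording as its mean MFCC vector."""
    y, sr = librosa.load(wav_path, sr=None)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

def train_ripeness_classifier(recordings):
    """`recordings` is a hypothetical list of (wav_path, label) pairs,
    label 1 = ripe, 0 = unripe; paths and labels are up to the caller."""
    X = np.array([mfcc_features(p) for p, _ in recordings])
    y = np.array([lab for _, lab in recordings])
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.25, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                        random_state=0)
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)  # classifier and held-out accuracy
```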
A Scalable Farm Skeleton for Heterogeneous Parallel Programming
['Steffen Ernsting', 'Herbert Kuchen']
A Scalable Farm Skeleton for Heterogeneous Parallel Programming
746,334
The paper proposes mvTANTs, three-level networks with multiple-valued inputs and binary outputs. These networks are a generalization of binary TANTs (Three-level And-Not networks with True inputs). One possible interpretation of an mvTANT is a four-level binary network with input decoders that realize multiple-valued literals. Like mvPLAs, mvTANTs have regular structures with predictable timing. Compared with mvPLAs, however, they have at least 25% fewer input wires to the third-level (NAND) plane and no more outputs from the second-level (AND) plane. Thus, in many cases they have fewer gates and connections, and are useful for minimizing Boolean functions in cellular FPGAs and other regular structures.
['Marek A. Perkowski', 'Malgorzata Chrzanowska-Jeske']
Multiple-valued-input TANT networks
471,596
In the near future, mobile nodes (MNs) will need to automatically select an optimal interface based on communication quality in a multimodal environment, that is, multimodal communication. While existing handover management schemes avoid communication termination, they cannot satisfy (1) seamless communication and (2) selection of an optimal wireless network based on communication quality, and thus their performance during handover is drastically decreased. In our previous work, we proposed a new handover management scheme that employs (a) a cross-layer approach and (b) multiple TCP connections to satisfy (1) and (2), and showed its effectiveness through simulation experiments. However, no other existing study has proposed a handover management scheme that employs both (a) a cross-layer approach exploiting beacon messages and (b) multiple TCP connections, and demonstrated their effectiveness on a real system. Thus, in this paper, we describe the implementation design of our proposed scheme in detail, and then show preliminary results on the fundamental performance of the prototype system in a real wireless environment.
['Kazuya Tsukamoto', 'Takeshi Yamaguchi', 'Shigeru Kashihara', 'Yuji Oie']
Implementation Design of Handover Management Scheme for Efficient Multimodal Communication
16,845
The keyhole condition, where the MIMO channel has only one degree of freedom, impairs the performance of MIMO systems. Thus, one may wish to design codes that are robust to this condition. So far, a general analysis of space-time codes in keyhole conditions has not been available (except in the special case of orthogonal space-time block codes). This work provides pairwise error probabilities for general space-time codes in keyhole condition. We present design criteria in high SNR, providing guidelines for codes that are robust to keyhole conditions. Also included is the proof of the intuitive result that the diversity under keyhole condition is min (M, N), where M and N are the number of transmit and receive antennas, with a slightly unexpected twist in the case of M=N.
['Shahab Sanayei', 'Ahmadreza Hedayat', 'Aria Nosratinia']
Space Time Codes in Keyhole Channels: Analysis and Design
476,253
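To make the keyhole condition concrete: the channel degenerates to a rank-one product H = g hᵀ, so only one spatial eigenmode survives regardless of the antenna counts. A quick numerical check of that degeneracy (not the paper's pairwise-error-probability analysis):

```python
import numpy as np

rng = np.random.default_rng(0)

def keyhole_channel(M, N):
    """Rank-one keyhole MIMO channel H = g h^T: all energy passes through a
    single scalar 'keyhole', leaving the channel one degree of freedom.
    M transmit and N receive antennas, i.i.d. complex Gaussian fading."""
    g = (rng.standard_normal((N, 1)) + 1j * rng.standard_normal((N, 1))) / np.sqrt(2)
    h = (rng.standard_normal((1, M)) + 1j * rng.standard_normal((1, M))) / np.sqrt(2)
    return g @ h

H = keyhole_channel(M=4, N=4)
print(np.linalg.matrix_rank(H))            # 1: a single spatial eigenmode
print(np.linalg.svd(H, compute_uv=False))  # one non-negligible singular value
```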
This paper presents a generalized eigen-combining algorithm for an adaptive array system that provides a diversity gain in angular spread, and a maximum signal-to-interference ratio (SIR) in the presence of strong interference. The proposed technique generates the maximum ratio of signal components distributed in the spatial channel subspace using channel bases that span the signal subspace and nullify the interference. Through extensive computer simulations, it is shown that the proposed algorithm represents a breakthrough in preventing performance saturation caused by strong co-channel interference.
['Seungheon Hyeon', 'Seungwon Choi']
Generalized eigen-combining algorithm for adaptive array systems in a co-channel interference environment
245,910
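The classical building block behind eigen-combining for maximum SIR is the generalized eigenproblem on the signal and interference covariance pair; a minimal sketch follows. The paper's algorithm additionally constructs channel bases spanning the signal subspace, which is not reproduced here.

```python
import numpy as np
from scipy.linalg import eigh

def max_sir_weights(Rs, Ri):
    """Weight vector maximising the ratio w^H Rs w / w^H Ri w, where Rs is
    the signal covariance and Ri the interference-plus-noise covariance
    (both Hermitian, Ri positive definite): the principal generalised
    eigenvector of the matrix pencil (Rs, Ri)."""
    eigvals, eigvecs = eigh(Rs, Ri)  # generalised problem, ascending order
    return eigvecs[:, -1]            # eigenvector of the largest eigenvalue
```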
Guerilla VR
['Josephine Anstey']
Guerilla VR
684,197
Tactile codes and devices are widely used in virtual reality, robotics, and telemanipulation by blind and sighted people. Based upon the Braille code, we provide a rigorous analysis of tactile spatial codes, and propose a code that is improved in the information theoretical sense. Based upon the criterion of a minimum average code word weight, we derive a code that provides improved reliability and longer lifetime of tactile devices. Such an approach can be also employed for robot hands, and tactile feedback in virtual reality and dextrous telemanipulation.
['Danilo P. Mandic', 'Richard William Harvey', 'Djemal H. Kolonic']
On the choice of tactile code
346,717
Proposal for Driver Distraction Indexes Using Biological Signals Including Eye Tracking
['Nobumichi Takahashi', 'Satoshi Inoue', 'H. Seki', 'Shuhei Ushio', 'Yukou Saito', 'Koyo Hasegawa', 'Michiko Ohkura']
Proposal for Driver Distraction Indexes Using Biological Signals Including Eye Tracking
642,984
Typically, most research and academic institutions own and archive a great amount of objects and research related resources that have been produced, used and maintained over long periods of time by different types of domain experts (e.g. lecturers and researchers). Although the potential educational value of these resources is very high, this potential may largely be underused due to severe accessibility and manipulability constraints. The virtualization of these resources, i.e. their representation as reusable digital learning objects that can be integrated in an e-learning environment, would allow the full exploitation of all their educational potential. In this paper we describe the process model that we have followed during the virtualization of the objects and research resources owned by two academic museums at the Complutense University of Madrid (Spain). In the context of this model we also summarize the main aspects of these experiences in virtualization.
['José Luis Sierra', 'Alfredo Fernández-Valmayor', 'Mercedes Guinea', 'Héctor Hernanz']
From Research Resources to Learning Objects: Process Model and Virtualization Experiences
300,513
This paper is devoted to the problem of error detection with quantum codes. We show that it is possible to give a consistent definition of the undetected error event. To prove this, we examine possible problem settings for quantum error detection. Our goal is to derive a functional that describes the probability of undetected error under natural physical assumptions concerning transmission with error detection with quantum codes. We discuss possible transmission protocols with stabilizer and unrestricted quantum codes. The set of results proved in the paper shows that in all the cases considered the average probability of undetected error for a given code is essentially given by one and the same function of its weight enumerators. We examine polynomial invariants of quantum codes and show that coefficients of Rains's (see ibid., vol. 44, p. 1388-94, 1998) "unitary weight enumerators" are known for classical codes under the name of binomial moments of the distance distribution. As in the classical situation, these enumerators provide an alternative expression for the probability of undetected error.
['Alexei E. Ashikhmin', 'Alexander Barg', 'E. Knill', 'Simon Litsyn']
Quantum error detection .I. Statement of the problem
272,747
Trajectories of moving objects have been an active research topic for over a decade. Classical approaches to analyzing the trajectories of these objects are mainly based on large amounts of data acquired from positioning devices, such as GPS receivers. GPS data has the advantage of describing the trajectory of an object in great detail and with high accuracy, but it does not carry any semantic information. On the other hand, the number of social network interactions carrying some kind of information about the user's location is growing. Georeferenced social interactions are an important source of semantics about users' trajectories and activities. This work proposes a solution for reconstructing travel histories using heterogeneous social track sources: posts in social networks, GPS positioning data, location history data generated by cloud services, or any digital footprint with an associated geographic position. The solution encompasses a conceptual model; a methodology to reconstruct travel histories based on heterogeneous social track sources; and an application that presents the reconstructed travel itinerary in a graphical and interactive fashion. An experiment conducted with real travel experiences showed that the proposed solution is a reasonable way to reconstruct travel histories, geographically and semantically, in an automatic fashion.
['Amon Veiga Santana', 'Jorge Campos']
Travel History: Reconstructing Semantic Trajectories Based on Heterogeneous Social Tracks Sources
945,996
In the context of an information search task, does the visual salience of items interact with information scent? That is, do things like bold headlines or highlighted phrases interact with local semantic cues about the usefulness of distal sources of information? Most research on visual search and highlighting has used stimuli with no semantic content, while studies on information search have assumed equal visual salience of items in the search space. In real information environments like the Web, however, these things do not occur in isolation. Thus, we used a laboratory study to examine how these factors interact. The almost perfectly additive results imply that good information scent cannot overcome poor visual cues, or vice versa, and that both factors are equally important.
['Franklin P. Tamborello', 'Michael D. Byrne']
Information search: the intersection of visual and semantic space
434,256
In this paper, we present a novel informative path planning algorithm using an active sensor for efficient environmental monitoring. While the state-of-the-art algorithms find the optimal path in a continuous space using sampling-based planning method, such as rapidly-exploring random graphs (RRG), there are still some key limitations, such as computational complexity and scalability. We propose an efficient information gathering algorithm using an RRG and a stochastic optimization method, cross entropy (CE), to estimate the reachable information gain at each node of the graph. The proposed algorithm maintains the asymptotic optimality of the RRG planner and finds the most informative path satisfying the cost constraint. We demonstrate that the proposed algorithm finds a (near) optimal solution efficiently compared to the state-of-the-art algorithm and show the scalability of the proposed method. In addition, the proposed method is applied to multi-robot informative path planning.
['Junghun Suh', 'Kyunghoon Cho', 'Songhwai Oh']
Efficient graph-based informative path planning using cross entropy
971,156
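A generic sketch of the cross-entropy (CE) optimization loop the abstract relies on, with a Gaussian sampling distribution and a toy objective; the paper applies CE to estimating reachable information gain at RRG nodes, which is not modeled here.

```python
import numpy as np

def cross_entropy_maximize(f, dim, iters=50, n_samples=200, n_elite=20, seed=0):
    """Generic cross-entropy method: repeatedly sample candidates from a
    Gaussian, keep the elite fraction with the highest objective value, and
    refit the Gaussian to those elites."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        x = rng.normal(mu, sigma, size=(n_samples, dim))
        elite = x[np.argsort([f(xi) for xi in x])[-n_elite:]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

# Toy usage: maximise a concave quadratic; optimum at (1, -2).
print(cross_entropy_maximize(lambda v: -((v[0] - 1)**2 + (v[1] + 2)**2), dim=2))
```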
This paper addresses the problem of global adaptive stabilisation by output feedback for a class of nonlinear systems in which both the input and output are logarithmically quantised. The nonlinear functions of the systems need not be completely known and contain time-varying parameters that belong to an unknown bounded set. Based on a dynamic high-gain technique, a linear-like quantised controller computed from the quantised output is constructed, and a guideline is derived for selecting the parameters of the quantisers. It is proved that, with the proposed scheme, all the states of the system can be globally steered to the origin while keeping the other signals of the closed-loop system bounded.
['Guangqi Li', 'Yan Lin']
Adaptive output feedback control for a class of nonlinear systems with quantised input and output
723,815
Software implementation of an Attribute-Based Encryption scheme.
['Eric Zavattoni', 'Luis J. Dominguez Perez', 'Shigeo Mitsunari', 'Ana H. Sánchez-Ramírez', 'Tadanori Teruya', 'Francisco Rodríguez-Henríquez']
Software implementation of an Attribute-Based Encryption scheme.
791,078
Alpha matting, which aims at extracting foreground elements from a natural image by means of color and opacity (alpha) estimation, is one of the key techniques for image editing and film production. However, matting is inherently an ill-posed problem, and many matting approaches perform poorly on complex natural images. A new matting algorithm is proposed. The proposed method involves expanding the known regions, selecting foreground and background colors for unknown pixels, determining the confidence of these samples, and optimizing the initial alpha by minimizing the objective function. Finally, experiments on benchmark images are carried out, and the results show that, compared with some popular matting methods, the proposed method performs better.
['Rui Huang', 'Xiang Wang']
A new alpha matting for nature image
332,781
The proposed study introduces a total least squares with structure selection (TLSS) algorithm to identify continuous-time differential equation models from the generalized frequency response function matrix (GFRFM) of a multiple-input multiple-output (MIMO) nonlinear system. The estimation procedure is progressive: the parameters of each degree of nonlinearity of each subsystem are estimated beginning with the linear terms and then adding higher-order nonlinear terms. The algorithm combines the advantages of both total least squares and orthogonal least squares with structure selection (OLSSS). The error reduction ratio (ERR) feature of OLSSS is exploited to provide an effective way of detecting the correct model structure, i.e., which terms to include in the model, while the total least squares algorithm provides accurate parameter estimates when the data is corrupted with noise. The performance of the algorithm has been compared with the weighted complex orthogonal estimator and shown to be superior.
['Akshya Swain', 'Cheng-shun Lin', 'Eduardo M. A. M. Mendes']
Frequency Domain Identification of Multiple Input Multiple Output Nonlinear Systems
444,703
Overcoming comprehension barriers in the AspectJ programming language
['Venera Arnaoudova', 'Laleh Mousavi Eshkevari', 'Elaheh Safari-Sharifabadi', 'Constantinos Constantinides']
Overcoming comprehension barriers in the AspectJ programming language
399,163
New sensor technologies open possibilities for measuring traditional biosignals in new, innovative ways. This, together with the development of signal processing systems and their increasing computing power, can sometimes give new life to old measurement techniques. The ballistocardiogram (BCG) is one such technique, originally promising but later eclipsed by the now very popular electrocardiogram. Its usability was previously limited by the large size of the devices required to record it, and by the complex nature of the recorded signal, which gave little information on visual inspection. In this paper, we present how a lightweight and flexible electromechanical film (EMFi) sensor can be used to record the BCG. A ballistocardiographic chair, designed to look like a normal office chair, was built and fitted with two sensitive EMFi sensors. Two measurement setups to record the signal from the EMFi sensors were developed. The first, so-called wired setup uses a commercial bio-amplifier and a special pre-amplifier to interface with it. The latter, so-called wireless setup uses our own hardware to transmit the recorded digitized signals wirelessly to a nearby PC. Both of these systems are presented and their performance evaluated. The suitability, limitations, and advantages of the EMFi sensor over existing sensors and methods are also discussed, and the validity of the EMFi sensor and amplifier output is tested using a mechanical vibrator. Lastly, a summary of the signal analysis methods developed for our system is given. The developed systems have been used for medical BCG measurements, and the recordings indicate that both systems are functional and capture useful BCG signal components.
['Sakari Junnila', 'Alireza Akhbardeh', 'Alpo Värri']
An Electromechanical Film Sensor Based Wireless Ballistocardiographic Chair: Implementation and Performance
349,549
A linear time invariant system with uncertain initial conditions, perturbed parameters, and active disturbance signals operates in open loop as a result of feedback failure or interruption. The objective is to find an optimal input signal that drives the system for the longest time without exceeding specified error bounds, to allow maximal time for feedback reactivation. It is shown that such a signal exists, and that it can be replaced by a bang-bang signal without significantly affecting performance. The use of bang-bang signals simplifies calculation and implementation.
['Debraj Chakraborty', 'Jacob Hammer']
Optimal low error control of disturbed systems
89,769
Stochastic bounds on distributions of optimal value functions with applications to pert, network flows and reliability
['Gideon Weiss']
Stochastic bounds on distributions of optimal value functions with applications to pert, network flows and reliability
328,715
In this work we determine the expected number of vertices of degree k = k(n) in a graph with n vertices that is drawn uniformly at random from a subcritical graph class. Examples of such classes are outerplanar, series-parallel, cactus and clique graphs. Moreover, we provide exponentially small bounds for the probability that the quantities in question deviate from their expected values.
['Nicla Bernasconi', 'Konstantinos Panagiotou', 'Angelika Steger']
The degree sequence of random graphs from subcritical classes
406,039
Goals and Scenarios to Software Product Lines: the GS2SPL Approach.
['Gabriela Guedes', 'Carla T. L. L. Silva', 'Jaelson Castro']
Goals and Scenarios to Software Product Lines: the GS2SPL Approach.
792,834
Memory built-in self-test (BIST) is a critical portion of the chip design and electronic design automation (EDA) flow. A BIST tool needs to understand the memory at the topological and layout levels in order to test for the correct fault models. The BIST also needs to be fully integrated into the overall EDA flow in order to have the least impact on chip area and have the greatest ease of use to the chip designer.
['R. Dean Adams', 'Robert Abbott', 'Xiaoliang Bai', 'Dwayne Burek', 'Eric MacDonald']
An integrated memory self test and EDA solution
419,270
Reliability-Based Optimization Using Evolutionary Algorithms
['Kalyanmoy Deb', 'Shubham Gupta', 'David A. Daum', 'Jürgen Branke', 'Abhishek Kumar Mall', 'Dhanesh Padmanabhan']
Reliability-Based Optimization Using Evolutionary Algorithms
631,571
While the rapid progress of smart city technologies is changing cities and the lifestyle of their residents, there are increasingly serious challenges to the safety and security of smart cities. The potential vulnerabilities of e-government products and imminent attacks on smart city infrastructure and services could have catastrophic consequences for governments and cause substantial economic and noneconomic losses, even chaos, to cities and their residents. This paper explores alternative economic solutions, ranging from incentive mechanisms to market-based solutions, to motivate smart city product vendors, governments, and vulnerability researchers and finders to improve the cybersecurity of smart cities.
['Zhen Li', 'Qi Liao']
An Economic Alternative to Improve Cybersecurity of E-government and Smart Cities
815,023
This article introduces the fractional Fourier transform (FrFT) to speech enhancement. A novel algorithm is proposed for estimating the FrFT order; the determination of the optimal order is a crucial issue for the FrFT. We use the information on pitches, harmonics, and formants in the correlogram of a Gammatone filterbank to obtain a few candidates for the transform order. The proposed method reduces the computational complexity of the search for the optimal transform order. The experimental results on speech enhancement show that the proposed method is superior to conventional spectral subtraction in terms of the SNR improvement of the enhanced speech and the Itakura-Saito distance of the LPC coefficients.
['Duo-jia Ma', 'Xiang Xie', 'Jingming Kuang']
A novel algorithm of seeking FrFT order for speech enhancement
523,469
Motor imagery-based BCI (MI-BCI) technology possesses the potential to be a post-stroke rehabilitation tool. To ensure the optimal use of the MI-BCI technology for stroke rehabilitation, the ability to measure the motor recovery patterns is important. In this study, the relationship between the EEG recorded during, and the changes in the recovery patterns before and after MI-BCI rehabilitation is investigated. Nine stroke patients underwent 10 sessions of 1 hour MI-BCI rehabilitation with robotic feedback for 2 weeks, 5 times a week. The coherence index (0 ≤ CI ≤ 1), which is an EEG metric comparing the coherences of the EEG in the ipsilesioned hemisphere with that in the contralesioned hemisphere, was computed for each session for the first week. Pre- and post-rehabilitation motor functions were measured with the Fugl-Meyer assessment (FMA). The number of sessions with CI greater than a unique subject-dependent baseline value ζ correlated with the change in the FMA scores (R = 0.712, p = 0.031). Subsequently, a leave-one-out approach resulted in a prediction mean squared error (MSE) of 15.1 using the established relationship. This result is better compared to using the initial FMA score as a predictor, which gave a MSE value of 18.6. This suggests that CI computed from EEG may have a prognostic value for measuring the motor recovery for MI-BCI.
['Sau Wai Tung', 'Cuntai Guan', 'Kai Keng Ang', 'Kok Soon Phua', 'Chuanchu Wang', 'Christopher Wee Keong Kuah', 'Karen Sui Geok Chua', 'Yee Sien Ng', 'Ling Zhao', 'Effie Chew']
A measurement of motor recovery for motor imagery-based BCI using EEG coherence analysis
728,901
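A hedged sketch of a coherence index of the kind described above, comparing mean band-limited EEG coherence between hemispheres with scipy.signal.coherence. The channel grouping, frequency band, and normalisation are assumptions of this sketch; the paper's exact CI definition may differ.

```python
import numpy as np
from scipy.signal import coherence

def coherence_index(ipsi, contra, fs=250.0, band=(8.0, 30.0)):
    """Toy coherence index in [0, 1]: mean band-limited coherence over
    channel pairs in the ipsilesional hemisphere, normalised against the
    contralesional hemisphere. `ipsi`/`contra` are lists of 1-D EEG
    channel arrays sampled at fs Hz."""
    def mean_band_coherence(chans):
        vals = []
        for i in range(len(chans)):
            for j in range(i + 1, len(chans)):
                f, cxy = coherence(chans[i], chans[j], fs=fs)
                mask = (f >= band[0]) & (f <= band[1])
                vals.append(cxy[mask].mean())
        return np.mean(vals)

    c_ipsi = mean_band_coherence(ipsi)
    c_contra = mean_band_coherence(contra)
    return c_ipsi / (c_ipsi + c_contra)
```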
In developing a multi-UAV (Unmanned Aerial Vehicle) system, a simulation environment is essential to verify the functionality of the whole system with higher productivity and a reduced risk of accidents. Simulations of multi-UAV systems are usually accomplished using distributed systems in which multiple standalone simulators are connected via a network. Thus, in order to achieve improved realism in such distributed simulation environments, the whole multi-UAV simulator should guarantee real-time and synchronized behavior among the connected systems. In this paper, we discuss the necessity of two important real-time features, i.e., the periodicity of computations and clock synchronization among standalone simulators. We also propose an architecture based on the TMO (Time-triggered and Message-triggered Object) model as a multi-UAV simulation platform, with preliminary experimental results showing its potential feasibility.
['Seung-Hwa Song', 'Doo-Hyun Kim', 'Chun-Hyon Chang']
Experimental Reliability Analysis of Multi-UAV Simulation with TMO-Based Distributed Architecture and Global Time Synchronization
105,680
Current moving object detection systems typically detect shadows cast by the moving object as part of the moving object. In this paper, the problem of separating moving cast shadows from the moving objects in an outdoor environment is addressed. Unlike previous work, we present an approach that does not rely on any geometrical assumptions such as camera location and ground surface/object geometry. The approach is based on a new spatio-temporal albedo test and dichromatic reflection model and accounts for both the sun and the sky illuminations. Results are presented for several video sequences representing a variety of ground materials when the shadows are cast on different surface types. These results show that our approach is robust to widely different background and foreground materials, and illuminations.
['Sohail Nadimi', 'Bir Bhanu']
Physical models for moving shadow and object detection in video
122,941
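Behind many shadow separation methods is the observation that a cast shadow scales the background intensity roughly uniformly while leaving the surface albedo unchanged. The toy ratio test below is a crude stand-in for the paper's spatio-temporal albedo test and dichromatic reflection model; all thresholds and window sizes are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def shadow_mask(frame, background, lo=0.4, hi=0.95, max_ratio_std=0.05, win=5):
    """Label a pixel as shadow if the frame/background intensity ratio is
    (a) below 1 (darker than background) and (b) locally near-constant,
    i.e. the underlying albedo is unchanged. `frame` and `background` are
    grayscale images of equal shape."""
    ratio = frame.astype(float) / np.maximum(background.astype(float), 1.0)
    in_range = (ratio > lo) & (ratio < hi)
    # local mean and standard deviation of the ratio via box filters
    k = np.ones((win, win)) / win**2
    mean = convolve(ratio, k, mode="nearest")
    var = convolve(ratio**2, k, mode="nearest") - mean**2
    return in_range & (np.sqrt(np.maximum(var, 0)) < max_ratio_std)
```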
In this paper, we consider the classical contention resolution problem in which an unknown subset of n possible nodes are activated and connected to a shared channel. The problem is solved in the first round in which an active node transmits alone (thus breaking symmetry). Contention resolution has been an active research topic for over four decades. Accordingly, tight upper and lower bounds are known for most major model assumptions. There remains, however, an important case that is unresolved: contention resolution with multiple channels and collision detection. (Tight bounds are known for contention resolution with multiple channels, and for contention resolution with collision detection, but not for the combination of both assumptions.) Recent work proved the first non-trivial lower bound for randomized solutions to this problem in this setting. The optimality of this lower bound was left as an open question. In this paper, we answer this open question by describing and analyzing new contention resolution algorithms that match, or come within a log log log n factor of matching, this bound for all relevant parameters. By doing so, we help advance our understanding of an important longstanding problem. Of equal importance, our solutions introduce a novel technique in which we leverage a distributed structure we call coalescing cohorts to simulate a well-known parallel search strategy from the structured CREW PRAM model in our unstructured distributed model. We conjecture that this approach is relevant to many problems in the increasingly important setting of distributed computation using multiple shared channels.
['Jeremy T. Fineman', 'Calvin C. Newport', 'Tonghe Wang']
Contention Resolution on Multiple Channels with Collision Detection
853,279
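To make the problem statement concrete, here is a toy single-channel contention resolution loop with collision detection, where active nodes adapt a shared transmission probability. The paper's multi-channel algorithms and coalescing-cohort technique are far more involved and are not reproduced here.

```python
import random

def contention_resolution(n, seed=0):
    """Toy single-channel contention resolution with collision detection:
    each of the n active nodes transmits with probability p each round;
    on a collision p is halved, on silence p is doubled. Returns the number
    of rounds until some node transmits alone (symmetry is broken)."""
    rng = random.Random(seed)
    p, rounds = 0.5, 0
    while True:
        rounds += 1
        transmitters = sum(1 for _ in range(n) if rng.random() < p)
        if transmitters == 1:
            return rounds        # success: exactly one node on the channel
        if transmitters == 0:
            p = min(1.0, 2 * p)  # silence heard: transmit more aggressively
        else:
            p = max(1e-9, p / 2) # collision detected: back off

print(contention_resolution(n=1000))
```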
The subspace source localization approach of first principal vectors (FINE) is able to enhance the spatial resolvability and localization accuracy for closely spaced neural sources from EEG and MEG measurements. Computer simulations were conducted to evaluate the performance of the FINE algorithm in an inhomogeneous realistic-geometry head model under a variety of conditions. The source localization abilities of FINE were examined at different cortical regions and at different depths. The present computer simulation results indicate that FINE has enhanced source localization capability, as compared with MUSIC and RAP-MUSIC, when sources are closely spaced, highly noise-contaminated, or inter-correlated. The source localization accuracy of FINE is better than that of MUSIC, for closely spaced sources, at various noise levels, i.e., signal-to-noise ratios (SNRs) from 6 dB to 16 dB, and better than that of RAP-MUSIC at relatively low noise levels, i.e., 6 dB to 12 dB. The FINE approach has been further applied to localize brain sources of motor potentials obtained during finger tapping tasks in a human subject. The experimental results suggest that the detailed neural activity distribution can be revealed by FINE. The present study suggests that FINE provides enhanced performance in localizing multiple closely spaced and inter-correlated sources under low SNR, and may become an important alternative for brain source localization from EEG or MEG.
['Lei Ding', 'Bin He']
Spatio-temporal EEG source localization using a three-dimensional subspace FINE approach in a realistic geometry inhomogeneous head model
318,131
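For context, a minimal NumPy sketch of the classical MUSIC scan that FINE refines: project candidate lead-field vectors onto the noise subspace of the data covariance. This is baseline MUSIC only, not FINE; array shapes and names are assumptions.

```python
import numpy as np

def music_spectrum(data, leadfields, n_sources):
    """data: (n_sensors, n_samples); leadfields: (n_candidates, n_sensors).
    Peaks of the returned spectrum indicate likely source locations."""
    cov = data @ data.T / data.shape[1]
    _eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    noise_space = eigvecs[:, :-n_sources]     # span of the smallest eigenvalues
    spectrum = []
    for g in leadfields:
        g = g / np.linalg.norm(g)
        resid = np.linalg.norm(noise_space.T @ g)  # projection onto noise space
        spectrum.append(1.0 / (resid ** 2 + 1e-12))
    return np.asarray(spectrum)
```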
This work considers the online sensor selection for the finite-horizon sequential hypothesis testing. In particular, at each step of the sequential test, the “most informative” sensor is selected based on all the previous samples so that the expected sample size is minimized. In addition, certain sensors cannot be used more than their prescribed budgets on average. Under this setup, we show that the optimal sensor selection strategy is a time-variant function of the running hypothesis posterior, and the optimal test takes the form of a truncated sequential probability ratio test. Both of these operations can be obtained through a simplified version of dynamic programming. Numerical results demonstrate that the proposed online approach outperforms the existing offline approach by an order of magnitude.
['Shang Li', 'Xiaoou Li', 'Xiaodong Wang', 'Jingchen Liu']
Optimal sequential test with finite horizon and constrained sensor selection
882,547
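The shape of the decision rule can be illustrated with a truncated sequential probability ratio test over a stream of log-likelihood ratios. Note that the paper derives time-variant thresholds and a sensor-selection rule via dynamic programming; the constant thresholds below are a simplifying assumption.

```python
def truncated_sprt(llr_stream, lower, upper, horizon):
    """Accumulate log-likelihood ratios; stop at a boundary crossing or,
    at the horizon, decide by the sign of the running sum.
    Returns (decision, number_of_samples_used)."""
    s, t = 0.0, 0
    for t, llr in enumerate(llr_stream, start=1):
        s += llr
        if s >= upper:
            return "H1", t
        if s <= lower:
            return "H0", t
        if t >= horizon:
            break
    return ("H1" if s > 0 else "H0"), t
```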
In this paper, we show the economic value of flexibility in standards by linking the business concept of market uncertainty to the technical aspects of designing standards and implementing network-based services based on them. We quantify our theory with an options-like model, showing how to maximize overall gain from the market's point of view when creating standards used to build network-based services in highly uncertain markets. We show how network architectures based on standards that promote parallel experimentation, such as the end-2-end argument, have a greater chance of providing network-based services that meet an uncertain market because of the added value from the ability to innovate. This model provides a framework for understanding the tradeoffs among the ability to experiment, market uncertainty, the business and technical advantages of services with centralized management, and how service providers learn from past generations of the service.
['Mark Gaynor', 'Scott Bradner', 'Marco Iansiti', 'H. T. Kung']
The real options approach to standards for building network-based services
326,921
The abundance of unlicensed spectrum in the 60 GHz band makes it an attractive alternative for future wireless communication systems. Such systems are expected to provide data transmission rates on the order of multiple gigabits per second in order to satisfy the ever-increasing demand for high-rate data communication. Unfortunately, 60 GHz radio is subject to severe path loss, which limits its usability for long-range outdoor communication. In this work, we propose a multi-hop 60 GHz wireless network for outdoor communication where multiple full-duplex buffered relays are used to extend the communication range while providing end-to-end performance guarantees to the traffic traversing the network. We provide a cumulative service process characterization for the 60 GHz outdoor propagation channel with self-interference in terms of the moment generating function (MGF) of its channel capacity. We then use this characterization to compute probabilistic upper bounds on the overall network performance, i.e., total backlog and end-to-end delay. Furthermore, we study the effect of self-interference on the network performance and propose an optimal power allocation scheme to mitigate its impact in order to enhance network performance. Finally, we investigate the relation between relay density and network performance under a total power budget constraint. We show that increasing relay density may have adverse effects on network performance unless self-interference can be kept sufficiently small.
['Guang Yang', 'Ming Xiao', 'Hussein Al-Zubaidy', 'Yongming Huang', 'James Gross']
Analysis of Multi-Hop Outdoor 60 GHz Wireless Networks with Full-Duplex Buffered Relays
839,804
This paper presents a new blind adaptive channel equalizer, which is based on two well-defined cost functions, the constant modulus algorithm (CMA) [1] and the alphabet-matched algorithm (AMA) [2]. The equalizer takes into account both the amplitude and the phase of the equalizer output and adapts based on the received channel data. The new equalization scheme presented here is compared to the multimodulus algorithm (MMA), which has been proposed in [3,4] as an improved equalizer for non-constant modulus signals, such as QAM. It is shown to provide better performance than the MMA in terms of both average mean-squared error (MSE) and average symbol error rate (SER) for the channels tested.
['Antoinette Beasley', 'Arlene Cole-Rhodes']
SPC06-2: Blind Adaptive Equalization for QAM Signals Using an Alphabet-Matched Algorithm
347,945
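A minimal NumPy sketch of the plain CMA tap update that the paper builds on (the proposed equalizer additionally mixes in an alphabet-matched cost, omitted here). Step size, tap count, and initialization are illustrative choices.

```python
import numpy as np

def cma_equalize(received, n_taps=11, mu=1e-3, radius=1.0):
    """Blind FIR equalization: adapt taps so |y|^2 is driven toward `radius`
    (the constant modulus), via w <- w - mu * y(|y|^2 - R) * conj(x)."""
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                       # center-spike initialization
    out = np.empty(len(received) - n_taps + 1, dtype=complex)
    for k in range(len(out)):
        x = received[k:k + n_taps][::-1]       # current regressor (reversed)
        y = np.dot(w, x)
        err = y * (np.abs(y) ** 2 - radius)    # CMA gradient term
        w -= mu * err * np.conj(x)
        out[k] = y
    return out, w
```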
Hybrid computational phantoms offer unique advantages for the construction of diverse anthropomorphic models. In this paper, a methodology is presented for the construction of patient-dependent phantoms built around anthropometric distributions of the U.S. adult and pediatric populations. The methodology relies on the flexibility of hybrid phantoms to match target anthropometric parameters as determined from National Center for Health Statistics databases. Target parameters as defined in this paper include the primary parameters such as standing height, sitting height, and total body mass; and secondary parameters such as waist, buttocks, arm, and thigh circumference. As a demonstration of this methodology, the UF hybrid adult male (UFHADM) and UF hybrid 10-year-old female (UFH10F) were selected as representative anchor phantoms for this study and were subsequently remodeled to create 25 different adult male and 15 different pediatric female patient-dependent phantoms. The phantoms were evaluated based on appearance and internal organ masses. Aesthetically, the phantoms appear correct and display characteristics of a diverse population including variability in body shape and standing/sitting height. Organ masses display several general trends, including a gradual increase with both standing height and subject weight. Selected organ masses from the UFHADM series were also compared with published correlations taken from a 2001 French-based autopsy study. The organ masses were located well within the statistical deviation presented in the autopsy study and followed similar trends when correlated with both standing height and body mass index.
['Perry Johnson', 'Scott Whalen', 'Michael Wayson', 'Badal Juneja', 'Choonsik Lee', 'Wesley E. Bolch']
Hybrid Patient-Dependent Phantoms Covering Statistical Distributions of Body Morphometry in the U.S. Adult and Pediatric Population
495,922
We show that proofs in Gentzen's LK can be considered as continuation-passing style (CPS) programs, and the cut-elimination procedure for LK as computation. More precisely, we observe that the strongly normalizable (SN) and Church-Rosser (CR) cut-elimination procedure for (the intuitionistic decoration of) LKT and LKQ, as presented in Danos et al. (1993), corresponds precisely to the call-by-name (CBN) and call-by-value (CBV) CPS calculi, respectively. This can also be seen as an extension to classical logic of the Zucker-Pottinger-Mints investigation of the relations between cut-elimination and normalization.
['Ichiro Ogata']
Cut Elimination for Classical Proofs as Continuation Passing Style Computation
529,941
Forecasting by regression is a very important method for the prediction of continuous values. Generally, in order to increase predictive accuracy and reliability, as many factors or features as possible are considered and added to the regression model; however, this degrades efficiency, accuracy, and interpretability. Besides, some existing methods associated with support vector regression (SVR) require solving a convex quadratic programming problem of high computational complexity. In this paper, we propose a novel two-phase multi-kernel SVR using a linear programming method (MK-LP-SVR) for feature sparsification and forecasting so as to address the aforementioned problems. The multi-kernel learning method is mainly used to carry out feature sparsification and find the important features by computing their contribution to forecasting, while the whole model can be used to predict output values for given inputs. Based on a simulation, 6 small, and 6 big data sets, the experimental results and a comparison with SVR, linear programming SVR (LP-SVR), least squares SVR (LS-SVR), and multiple kernel learning SVR (MKL-SVR) show that our proposed model considerably improves predictive accuracy and interpretability for regression forecasting on the independent test sets. A novel MK-LP-SVR model is proposed for feature sparsification and prediction. The multi-kernel method reduces dimensionality and yields an interpretable regression. The proposed method statistically significantly outperforms SVR and LS-SVR. The new model obtains better predictive accuracies on data sets of different scales.
['Zhiwang Zhang', 'Guangxia Gao', 'Yingjie Tian', 'Jue Yue']
Two-phase multi-kernel LP-SVR for feature sparsification and forecasting
825,258
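One way to picture the feature-sparsification idea: build one base kernel per input feature and let a learned sparse weight over these kernels indicate which features matter. The sketch below only constructs the per-feature kernel matrices; the paper's LP-based weight learning is not reproduced, and all names and parameters are assumptions.

```python
import numpy as np

def per_feature_rbf_kernels(X, gamma=1.0):
    """Return one RBF Gram matrix per feature column of X (shape n x d).
    A sparse nonnegative weighting over these kernels, learned downstream,
    acts as feature selection in a multi-kernel regression model."""
    kernels = []
    for j in range(X.shape[1]):
        d2 = (X[:, j][:, None] - X[:, j][None, :]) ** 2
        kernels.append(np.exp(-gamma * d2))
    return kernels
```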
Solid model abstraction is an integral part of parallel and process design. It is required for simulation and optimisation of the design and consists of retrieving a simplified model from the solid one, with appropriate dimension reduction and detail removal. Unfortunately, current CAD systems do not provide the means for easy simplification of forms. Ongoing research efforts on abstraction yield some possible approaches. These attempt to generate the abstract model using expert systems, medial axis transforms or medial surfaces. A feature-based approach is presented. Analysts usually think about objects in terms of sets of forms rather than in terms of geometrical and topological entities. So, given an object and its description in terms of form features, the approach constructs a simplified model of this object using morphological information of the form features. The abstraction process has two parts: simplification and idealization. The simplification part removes any non-pertinent features from the initial model, while the idealization part idealizes the resulting objects according to the goal of the analysis.
['Mohamed Belaziz', 'Abdelaziz Bouras', 'Jean-Marc Brun']
Solid model abstraction using form features
4,036
A model and language for service composition.
['Philippe Ramadour', 'Myriam Fakhri']
A model and language for service composition.
731,808
Media analysis for video indexing is witnessing an increasing influence of statistical techniques. Examples of these techniques include the use of generative models as well as discriminant techniques for video structuring, classification, summarization, indexing and retrieval. Advances in multimedia analysis are related directly to advances in signal processing, computer vision, pattern recognition, multimedia databases and smart sensors. This paper highlights the statistical techniques in multimedia retrieval with particular emphasis on semantic characterization.
['Milind R. Naphade']
Statistical techniques in video data management
492,262
In this paper, we address the problem of video multicasting in ad hoc wireless networks. The salient characteristics of video traffic make conventional multicasting protocols perform quite poorly, hence warranting application-centric approaches in order to increase robustness to packet losses and lower the overhead. By exploiting the path diversity and error resilience properties of Multiple Description Coding (MDC), we propose a Robust Demand-driven Video Multicast Routing (RDVMR) protocol. Our protocol uses a novel path-based Steiner tree heuristic to reduce the number of forwarders in each tree, and constructs multiple trees in parallel with a reduced number of common nodes among them. Moreover, unlike other on-demand multicast protocols, RDVMR specifically attempts to reduce the periodic (non on-demand) control traffic. We extensively evaluate RDVMR in the NS2 simulation framework and show that it outperforms existing single-tree and two-tree multicasting protocols.
['D. Agrawal', 'Tirupathi Reddy', 'C.S.R. Murthy']
Robust Demand-Driven Video Multicast over Ad hoc Wireless Networks
327,101
We introduce a mixture model whereby each mixture component is itself a mixture of a multivariate Gaussian distribution and a multivariate uniform distribution. Although this model could be used for model-based clustering (model-based unsupervised learning) or model-based classification (model-based semi-supervised learning), we focus on the more general model-based classification framework. In this setting, we fit our mixture models to data where some of the observations have known group memberships and the goal is to predict the memberships of observations with unknown labels. We also present a density estimation example. A generalized expectation-maximization algorithm is used to estimate the parameters and thereby give classifications in this mixture of mixtures model. To simplify the model and the associated parameter estimation, we suggest holding some parameters fixed; this leads to the introduction of more parsimonious models. A simulation study is performed to illustrate how the model allows for bursts of probability and locally higher tails. Two further simulation studies illustrate how the model performs on data simulated from multivariate Gaussian distributions and on data from multivariate t-distributions. This novel approach is also applied to real data and the performance of our approach under the various restrictions is discussed.
['Ryan P. Browne', 'Paul D. McNicholas', 'Matthew D. Sparling']
Model-Based Learning Using a Mixture of Mixtures of Gaussian and Uniform Distributions
106,132
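The E-step of such a model is easy to sketch: within one component, a point's responsibility is split between the Gaussian part and the uniform part. Below is a minimal version for a single component whose uniform part is supported on an axis-aligned box; the box support and all names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal

def gaussian_part_responsibility(X, mu, cov, w, box_lo, box_hi):
    """P(point came from the Gaussian part | x) for one component whose
    density is w * N(mu, cov) + (1 - w) * Uniform(box); X has shape (n, d)."""
    box_lo, box_hi = np.asarray(box_lo), np.asarray(box_hi)
    gauss = w * multivariate_normal(mean=mu, cov=cov).pdf(X)
    volume = np.prod(box_hi - box_lo)
    inside = np.all((X >= box_lo) & (X <= box_hi), axis=1)
    unif = (1.0 - w) * inside / volume
    return gauss / (gauss + unif + 1e-300)
```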
Visualization of flow on boundary surfaces from computational flow dynamics (CFD) is challenging due to the complex, adaptive resolution nature of the meshes used in the modeling and simulation process. This paper presents a fast and simple glyph placement algorithm in order to investigate and visualize flow data based on unstructured, adaptive resolution boundary meshes from CFD. The algorithm has several advantages: (1) Glyphs are automatically placed at evenly-spaced intervals. (2) The user can interactively control the spatial resolution of the glyph placement and their precise location. (3) The algorithm is fast and supports multiresolution visualization of the flow at surfaces. The implementation supports multiple representations of the flow: some optimized for speed, others for accuracy. Furthermore, the approach does not rely on any pre-processing of the data or parameterization of the surface and handles large meshes efficiently. The result is a tool that provides engineers with a fast and intuitive overview of their CFD simulation results.
['Zhenmin Peng', 'Robert S. Laramee']
Vector glyphs for surfaces: A fast and simple glyph placement algorithm for adaptive resolution meshes
378,691
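The evenly-spaced placement idea can be approximated with a simple spatial hash: keep at most one glyph seed per grid cell, with the cell size acting as the user's resolution control. This is a generic stand-in consistent with the abstract, not the authors' exact algorithm.

```python
import numpy as np

def place_glyphs(points, vectors, cell_size):
    """points: (n, 2 or 3) surface sample positions; vectors: matching flow
    vectors. Returns one (position, vector) pair per occupied grid cell,
    yielding roughly even glyph spacing controlled by cell_size."""
    chosen = {}
    for p, v in zip(np.asarray(points), np.asarray(vectors)):
        key = tuple((p // cell_size).astype(int))
        if key not in chosen:          # first sample claims the cell
            chosen[key] = (p, v)
    return list(chosen.values())
```

Halving cell_size roughly doubles the glyph density, which matches the interactive multiresolution control described above.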
A Crowdsourcing Practices Framework for Public Scientific Research Funding Agencies
['Kieran Conboy', 'Eoin Cullina', 'Lorraine Morgan']
A Crowdsourcing Practices Framework for Public Scientific Research Funding Agencies
908,398
Classical methods to model topological properties of point clouds, such as the Vietoris-Rips complex, suffer from the combinatorial explosion of complex sizes. We propose a novel technique to approximate a multi-scale filtration of the Rips complex with improved bounds for size: precisely, for n points in R^d, we obtain an O(d)-approximation with at most n2^{O(d log k)} simplices of dimension k or lower. In conjunction with dimension reduction techniques, our approach yields a O(polylog (n))-approximation of size n^{O(1)} for Rips filtrations on arbitrary metric spaces. This result stems from high-dimensional lattice geometry and exploits properties of the permutahedral lattice, a well-studied structure in discrete geometry. Building on the same geometric concept, we also present a lower bound result on the size of an approximate filtration: we construct a point set for which every (1+epsilon)-approximation of the Cech filtration has to contain n^{Omega(log log n)} features, provided that epsilon < 1/(log^{1+c}n) for c in (0,1).
['Aruni Choudhary', 'Michael Kerber', 'Sharath Raghvendra']
Polynomial-Sized Topological Approximations Using The Permutahedron
601,874
Designing low-latency cloud-based applications that are adaptable to unpredictable workloads and efficiently utilize modern cloud computing platforms is hard. The actor model is a popular paradigm that can be used to develop distributed applications: actors encapsulate state and communicate with each other by sending events. Consistency is guaranteed if each event only accesses a single actor, thus eliminating potential data races and deadlocks. However, it is nontrivial to provide consistency for concurrent events spanning multiple actors. This paper addresses this problem by introducing AEON: a framework that provides the following properties: (i) Programmability: programmers only need to reason about sequential semantics when reasoning about concurrency resulting from multi-actor events; (ii) Scalability: the AEON runtime protocol guarantees serializable and starvation-free execution of multi-actor events, while maximizing parallel execution; (iii) Elasticity: AEON supports fine-grained elasticity, enabling the programmer to transparently migrate individual actors without violating consistency or incurring significant performance overheads. Our empirical results show that it is possible to combine the best of all the above three worlds without compromising application performance.
['Bo Sang', 'Gustavo Petri', 'Masoud Saeida Ardekani', 'Srivatsan Ravi', 'Patrick Eugster']
Programming Scalable Cloud Services with AEON
941,971
In this paper the joint effects of inter-cell macrodiversity schemes combined with intra-cell spatial multiplexing are investigated for MIMO-OFDM systems. We compare the SINR and throughput performance of the cellular Alamouti scheme, the cellular cyclic delay diversity (CDD) scheme, and pure macrodiversity at the cell edge. Our simulation results indicate that the cellular Alamouti scheme outperforms the cellular CDD scheme, and that CDD has similar performance to the pure macrodiversity scheme. This result is useful for determining the suitable macrodiversity scheme when spatial multiplexing is implemented at the user terminals.
['Hsien-Wen Chang', 'Li-Chun Wang', 'Zhe-Hua Chou']
Macrodiversity Antenna Combining for MIMO-OFDM Cellular Mobile Networks in Supporting Multicast Traffic
157,209
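For reference, the classic 2x1 Alamouti combining that underlies "cellular Alamouti" (where the two transmit antennas belong to two cooperating cells) is a few lines of NumPy; flat, quasi-static channel gains h0, h1 known at the receiver are assumed.

```python
import numpy as np

def alamouti_combine(r0, r1, h0, h1):
    """Receiver combining for the Alamouti code:
      slot 1 transmits (s0, s1); slot 2 transmits (-conj(s1), conj(s0)),
      so r0 = h0*s0 + h1*s1 + n0 and r1 = -h0*conj(s1) + h1*conj(s0) + n1.
    Returns estimates of (s0, s1) with diversity gain |h0|^2 + |h1|^2."""
    gain = np.abs(h0) ** 2 + np.abs(h1) ** 2
    s0_hat = (np.conj(h0) * r0 + h1 * np.conj(r1)) / gain
    s1_hat = (np.conj(h1) * r0 - h0 * np.conj(r1)) / gain
    return s0_hat, s1_hat
```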
Equivalence and dominance relations used earlier in fault diagnosis procedures are defined as relations between faults, similar to the relations used for fault collapsing. Since the basic entity of diagnostic fault simulation and test generation is a fault pair, and not a single fault, we introduce a framework where equivalence and dominance relations are defined for fault pairs. Using equivalence and dominance relations between fault pairs, we define a fault pair collapsing process, where fault pairs are removed from consideration under diagnostic fault simulation and test generation since they are guaranteed to be distinguished when other fault pairs are distinguished. Another concept, which was used earlier to enhance fault collapsing, is the level of similarity between faults. We extend this definition into a level of similarity between fault pairs and discuss its use for fault pair collapsing. The level of similarity encompasses equivalence and dominance relations between fault pairs, and extends them to allow additional fault pair collapsing.
['Irith Pomeranz', 'Sudhakar M. Reddy']
Equivalence, Dominance, and Similarity Relations between Fault Pairs and a Fault Pair Collapsing Process for Fault Diagnosis
475,653
The large-scale integration of intermittent energy sources, the introduction of shiftable load elements and the growing interconnection that characterizes electricity systems worldwide have led to a significant increase of operational uncertainty. The construction of suitable statistical models is a fundamental step towards building Monte Carlo analysis frameworks to be used for exploring the uncertainty state-space and supporting real-time decision-making. The main contribution of the present paper is the development of novel composite modelling approaches that employ dimensionality reduction, clustering and parametric modelling techniques with a particular focus on the use of pair copula construction schemes. Large power system datasets are modelled using different combinations of the aforementioned techniques, and detailed comparisons are drawn on the basis of Kolmogorov-Smirnov tests, multivariate two-sample energy tests and visual data comparisons. The proposed methods are shown to be superior to alternative high-dimensional modelling approaches.
['Mingyang Sun', 'Ioannis Konstantelos', 'Simon H. Tindemans', 'Goran Strbac']
Evaluating composite approaches to modelling high-dimensional stochastic variables in power systems
866,989
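One of the comparison tools mentioned in the abstract above, the marginal two-sample Kolmogorov-Smirnov check, is nearly a one-liner with SciPy: validate each dimension of synthetic samples against historical data (the multivariate energy test is not shown). Names are illustrative.

```python
from scipy.stats import ks_2samp

def marginal_ks_statistics(real, synthetic):
    """Per-dimension KS statistics between historical data and model samples
    (both arrays shaped (n_samples, n_dims)); smaller is better."""
    return [ks_2samp(real[:, j], synthetic[:, j]).statistic
            for j in range(real.shape[1])]
```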
In this paper, we first propose a general framework for fuzzy causal networks (FCNs). Then, we study the dynamics and convergence of such general FCNs. We prove that any general FCN with a constant weight matrix converges to a limit cycle or a static state, or the trajectory of the FCN is not repetitive. We also prove that under certain conditions a discrete-state general FCN converges to its limit cycle or static state in O(n) steps, where n is the number of vertices of the FCN. This is in striking contrast with the exponential running time 2^n, which is widely accepted for classic FCNs.
['Sanming Zhou', 'Zhi-Qiang Liu', 'Jian Ying Zhang']
Fuzzy causal networks: general model, inference, and convergence
296,526
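The convergence behavior described above is easy to probe empirically for a discrete-state FCN: iterate the update and record visited states until one repeats (a static state shows up as a cycle of length 1). The threshold activation below is an illustrative choice of the update map, not the paper's general formulation.

```python
import numpy as np

def fcn_limit_behavior(W, s0, max_steps=10_000):
    """Iterate s_{t+1} = f(W @ s_t) for a discrete-state FCN and return
    ('cycle', length) on the first repeated state, else ('open', None)."""
    f = lambda x: (x > 0).astype(int)   # illustrative discrete activation
    seen = {}
    s = np.asarray(s0, dtype=int)
    for t in range(max_steps):
        key = tuple(s)
        if key in seen:
            return "cycle", t - seen[key]   # length 1 means a static state
        seen[key] = t
        s = f(W @ s)
    return "open", None
```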
Due to frequency-selective fading, modern wideband 802.11 transmissions have unevenly distributed per-bit BERs within a packet. In this paper, we propose to unequally protect packet bits according to their BERs. By doing so, we can best match the effective transmission rate of each bit to the channel condition and improve throughput. The major design challenge lies in deriving an accurate relationship between the frequency-selective channel condition and the decoded packet-bit BERs, all the way through the complex 802.11 PHY layer. Based on our study, we find that the decoding error of a packet bit corresponds to dense errors in the underlying codeword bits, and the BER can be faithfully approximated by the codeword bit error density. With the above observation, we propose UnPKT, a scheme that protects packet bits using different MAC-layer FEC redundancies based on bit-wise BER estimation to augment wideband 802.11 transmissions. UnPKT is software-implementable and compatible with the existing 802.11 architecture. Extensive evaluations based on Atheros 9580 NICs and GNU Radio platforms show the effectiveness of our design. UnPKT achieves a significant goodput improvement over state-of-the-art approaches.
['Yaxiong Xie', 'Zhenjiang Li', 'Mo Li', 'Kyle Jamieson']
Augmenting wide-band 802.11 transmissions via unequal packet bit protection
845,644
Zdzislaw Pawlak (1926-2006).
['Andrzej Ehrenfeucht', 'James F. Peters', 'Grzegorz Rozenberg', 'Andrzej Skowron']
Zdzislaw Pawlak (1926-2006).
796,255
Causal graphs, such as directed acyclic graphs (DAGs) and partial ancestral graphs (PAGs), represent causal relationships among variables in a model. Methods exist for learning DAGs and PAGs from data and for converting DAGs to PAGs. However, these methods only output a single causal graph consistent with the independencies/dependencies (the Markov equivalence class $M$) estimated from the data. Many distinct graphs may be consistent with $M$, and a data modeler may wish to select among these using domain knowledge. In this paper, we present a method that makes this possible. We introduce PAG2ADMG, the first method for enumerating all causal graphs consistent with $M$, under certain assumptions. PAG2ADMG converts a given PAG into a set of acyclic directed mixed graphs (ADMGs). We prove the correctness of the approach and demonstrate its efficiency relative to brute-force enumeration.
['Nishant Subramani', 'Doug Downey']
PAG2ADMG: A Novel Methodology to Enumerate Causal Graph Structures
952,225
Zero-Shot Learning (ZSL) enables a learning model to classify instances of an unseen class during training. While most research in ZSL focuses on single-label classification, few studies have addressed multi-label ZSL, where an instance is associated with a set of labels simultaneously, due to the difficulty in modeling the complex semantics conveyed by a set of labels. In this paper, we propose a novel approach to multi-label ZSL via concept embedding learned from collections of public users' annotations of multimedia. Thanks to concept embedding, multi-label ZSL can be done by efficiently mapping an instance's input features onto the concept embedding space, in a manner similar to single-label ZSL. Moreover, our semantic learning model is capable of embedding an out-of-vocabulary label by inferring its meaning from its co-occurring labels. Thus, our approach allows both seen and unseen labels during concept embedding learning to be used in the aforementioned instance mapping, which makes multi-label ZSL more flexible and suitable for real applications. Experimental results of multi-label ZSL on images and music tracks suggest that our approach outperforms a state-of-the-art multi-label ZSL model and can deal with a scenario involving out-of-vocabulary labels without re-training the semantic learning model.
['Ubai Sandouk', 'Ke Chen']
Multi-Label Zero-Shot Learning via Concept Embedding
815,193
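Once instance features are mapped into the concept embedding space, prediction for seen and unseen labels alike reduces to similarity scoring, as sketched below with cosine similarity; the feature-to-embedding mapping and the label embeddings are assumed given, and all names are illustrative.

```python
import numpy as np

def rank_labels(instance_emb, label_embs, label_names, top_k=5):
    """Score every label (seen or unseen) by cosine similarity between the
    instance's mapped embedding and each label's concept embedding."""
    norms = np.linalg.norm(label_embs, axis=1) * np.linalg.norm(instance_emb)
    sims = label_embs @ instance_emb / (norms + 1e-12)
    order = np.argsort(-sims)[:top_k]
    return [(label_names[i], float(sims[i])) for i in order]
```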
The Discrete Cosine Transform (DCT) is widely used in lossy image and video compression schemes such as JPEG and MPEG. In this paper we describe RD-OPT, an efficient algorithm for constructing DCT quantization tables with optimal rate-distortion tradeoffs for a given image. The algorithm uses DCT coefficient distribution statistics in a novel way and uses a dynamic programming strategy to produce optimal quantization tables over a wide range of rates and distortions. It can be used to compress images at any desired signal-to-noise ratio or compressed size.
['Viresh Ratnakar', 'Miron Livny']
RD-OPT: an efficient algorithm for optimizing DCT quantization tables
91,520
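The statistics-gathering step of such an optimizer is easy to sketch: for one DCT frequency, each candidate step size q yields a (rate, distortion) pair, and a dynamic program then picks one step per frequency to trade rate against distortion. Only the per-frequency measurement is shown below; names are illustrative, and the DP itself is omitted.

```python
import numpy as np

def rate_distortion_for_step(coeffs, q):
    """For samples of one DCT coefficient position: distortion is the MSE of
    uniform quantization with step q; rate is the empirical entropy
    (bits/coefficient) of the resulting quantization indices."""
    idx = np.round(coeffs / q).astype(int)
    distortion = np.mean((coeffs - idx * q) ** 2)
    _, counts = np.unique(idx, return_counts=True)
    p = counts / counts.sum()
    rate = float(-np.sum(p * np.log2(p)))
    return rate, distortion
```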
This paper describes a solver programming method, called contractor programming, that copes with two issues related to constraint processing over the reals. First, continuous constraint solving inevitably involves a solver design step. Existing software provides an insufficient answer by restricting users to choosing among a list of fixed strategies. Our first contribution is to give more freedom in solver design by introducing programming concepts where only configuration parameters were previously available. Programming consists in applying operators (intersection, composition, etc.) on algorithms called contractors that are somehow similar to propagators. Second, many problems with real variables cannot be cast as the search for vectors simultaneously satisfying the set of constraints, and a large variety of different outputs may be demanded from a set of constraints (e.g., a paving with boxes inside and outside of the solution set). These outputs can actually be viewed as the result of different contractors working concurrently on the same search space, with a bisection procedure intervening in case of deadlock. Such algorithms (which are not strictly speaking solvers) are made easy to build thanks to a new branch & prune system, called paver. Thus, this paper gives a way to deal harmoniously with a larger set of problems while giving fine control over the solving mechanisms. The contractor formalism and the paver system are the two contributions. The approach is motivated and justified through different case studies. An implementation of this framework named Quimper is also presented.
['Gilles Chabert', 'Luc Jaulin']
Contractor programming
715,066
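The programming style described above can be mimicked in a few lines: treat a contractor as any function mapping a box (a list of intervals) to a smaller box, and build new contractors from old ones with operators such as composition and fixpoint iteration. This mirrors the idea, not the Quimper API; all names are illustrative.

```python
def compose(*contractors):
    """Contractor that applies the given contractors in sequence."""
    def contracted(box):
        for ctc in contractors:
            box = ctc(box)
        return box
    return contracted

def fixpoint(ctc, tol=1e-9):
    """Contractor that iterates ctc until the box stops shrinking."""
    def contracted(box):
        while True:
            new = ctc(box)
            if all(abs(nl - ol) <= tol and abs(nh - oh) <= tol
                   for (nl, nh), (ol, oh) in zip(new, box)):
                return new
            box = new
    return contracted

# Example: a contractor enforcing x >= 0 on the first variable of a box.
nonneg_x = lambda box: [(max(box[0][0], 0.0), box[0][1])] + box[1:]
```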
In this paper, a fuzzy feedback control design problem with a mixed H2/H∞ performance is addressed by using the distributed proportional-spatial integral (P-sI) control approach for a class of nonlinear distributed parameter systems represented by semi-linear parabolic partial differential-integral equations (PDIEs). The objective of this paper is to develop a fuzzy distributed P-sI controller with a mixed H2/H∞ performance index for the semi-linear parabolic PDIE system. To do this, the semi-linear parabolic PDIE system is first assumed to be exactly represented by a Takagi–Sugeno (T–S) fuzzy parabolic PDIE model in a given local domain of Hilbert space. Then, based on the T–S fuzzy PDIE model, a distributed fuzzy P-sI state feedback controller is proposed such that the closed-loop PDIE system is locally exponentially stable with a mixed H2/H∞ performance. The sufficient condition on the existence of the fuzzy controller is given by using Lyapunov's direct method, the technique of integration by parts, and vector-valued Wirtinger's inequalities, and is presented in terms of standard linear matrix inequalities (LMIs). Moreover, by using existing LMI optimization techniques, a suboptimal H∞ fuzzy controller is derived in the sense of minimizing an upper bound of a given H2 performance function. Finally, the developed design methodology is successfully applied to feedback control of a semi-linear reaction–diffusion system with spatial integral terms.
['Jun-Wei Wang', 'Huai-Ning Wu', 'Yao Yu', 'Chang-Yin Sun']
Mixed H2/H∞ fuzzy proportional-spatial integral control design for a class of nonlinear distributed parameter systems
643,606
First responders need as much situational awareness as possible during the operations they perform while not being overwhelmed by all of the information that may be made available to them. This paper presents the Rover integration and fusion platform, which eases the fusing of multiple information sources that may not be known prior to the beginning of an operation. Rover aids fusion by providing as much contextual information as possible, information that is often neglected when designing programs; by mapping contextual information into a view that programs and users can use; and by automating well-known and well-designed tasks. All of this occurs while deploying Rover to first-responder incident scenes without the need for a network infrastructure existing prior to the incident.
['Christian B. Almazan', 'Moustafa Youssef', 'Ashok K. Agrawala']
Rover: An Integration and Fusion Platform to Enhance Situational Awareness
176,278
Multiple watermarking is a technique that permits embedding multiple data streams into the same media content. In this paper, a new multiple audio watermarking system is discussed. Our main objective is to increase the capacity of watermark information, under the inaudibility constraint, while maintaining a high degree of robustness against several disturbances. At the emitter, code division multiple access (CDMA) is used to embed multiple watermarks in the time domain of the host signal. At the receiver, we treat the detection process as a matter of blind source separation (BSS) between the audio signal and the embedded data. Two BSS techniques have been tested in the new multi-watermarking system: under-determined independent subspace analysis (UISA) and Gaussian mixture models (GMM). Experimental results show that inaudibility and high robustness of hidden data can be achieved simultaneously while increasing the amount of watermark information by a factor of 4.
['Mohammed Khalil', 'Abdellah Adib']
Multiple audio watermarking system based on CDMA
852,456
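The emitter side is classical spread spectrum and is sketched below, with a plain correlation detector standing in for the paper's BSS-based receiver; the embedding strength, the one-bit-per-watermark payload, and all names are illustrative assumptions.

```python
import numpy as np

def embed_cdma(host, n_marks, alpha=0.01, seed=0):
    """Add n_marks BPSK-modulated pseudo-noise sequences (one bit each)
    to a 1-D host signal; alpha trades inaudibility against robustness."""
    rng = np.random.default_rng(seed)
    codes = rng.choice([-1.0, 1.0], size=(n_marks, host.size))
    bits = rng.integers(0, 2, size=n_marks) * 2 - 1
    marked = host + alpha * (bits[:, None] * codes).sum(axis=0)
    return marked, codes, bits

def detect_cdma(marked, codes):
    """Matched-filter detection: correlate against each known code and take
    the sign (reliable when alpha * len(host) dominates host correlation)."""
    return np.sign(codes @ marked)
```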
We first present new concepts applicable to the design of multimedia search and retrieval schemes in general, and to MPEG-7 in particular, the multimedia description standard in progress. Raw multimedia data is assumed to exist in the form of programs that typically consist of a combination of media types such as visual, audio, and text. We partition each such media stream into smaller units based on actual physical events. These physical events within each media stream can then be effectively indexed for retrieval. The concept of logical events is introduced next; we define logical events as those that can provide different "views" of the content as may be desired by a user. Such events usually result from either the correlation of events that cross different media types, or by merging recursively chosen events from a lower level within each media type. We then address the related issue of how to develop a practical multimedia information retrieval system that exploits the aforementioned concepts of physical and logical events as well as other aspects such as storage, representation and indexing to enable efficient search, retrieval, and browsing. Finally, we implement the proposed concepts and solutions within a multimedia system that addresses a real application, effective browsing of broadcast news, and evaluate its performance.
['Qian Huang', 'Atul Puri', 'Zhu Liu']
Multimedia search and retrieval: new concepts, system implementation, and application
206,335
Unsupervised relation detection using automatic alignment of query patterns extracted from knowledge graphs and query click logs.
['Panupong Pasupat', 'Dilek Hakkani-Tür']
Unsupervised relation detection using automatic alignment of query patterns extracted from knowledge graphs and query click logs.
804,581
Many industries experience an explosion in digital content. This explosion of electronic documents, along with new regulations and document retention rules, sets new requirements for the performance efficiency of traditional data protection and archival tools. During a backup session a predefined set of objects (client filesystems) should be backed up. Traditionally, no information on the expected duration and throughput requirements of different backup jobs is provided. This may lead to a suboptimal job schedule that results in increased backup session time. In this work, we characterize each backup job via two metrics, called job duration and job throughput. These metrics are derived from historic information collected about backup jobs during previous backup sessions. Our goal is to automate the design of a backup schedule that minimizes the overall completion time for a given set of backup jobs. This problem can be formulated as a resource-constrained scheduling problem where a set of n jobs should be scheduled on m machines with given capacities. We provide an integer programming (IP) formulation of this problem and use available IP solvers for finding an optimized schedule, called a bin-packing schedule. Performance benefits of the new bin-packing schedule are evaluated via a broad variety of realistic experiments using backup processing data from six backup servers in HP Labs. The new bin-packing job schedule significantly reduces the backup session time (by 20%–60%). HP Data Protector (DP) is HP's enterprise backup offering and can directly benefit from the designed technique. Moreover, significantly reduced backup session times guarantee improved resource/power usage of the overall backup solution.
['Ludmila Cherkasova', 'Alex Zhang', 'Xiaozhou Li']
DP+IP = design of efficient backup scheduling
484,379
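To make the scheduling problem concrete, here is the classic longest-processing-time-first greedy, a simple stand-in for the paper's IP-based bin-packing schedule (the IP additionally models per-server capacity constraints); all names are illustrative.

```python
import heapq

def lpt_schedule(job_durations, n_servers):
    """Assign jobs (name -> duration) to servers, longest job first, always
    onto the least-loaded server; returns the assignment and the makespan."""
    loads = [(0.0, s) for s in range(n_servers)]   # (current load, server id)
    heapq.heapify(loads)
    assignment = {}
    for job, dur in sorted(job_durations.items(), key=lambda kv: -kv[1]):
        load, s = heapq.heappop(loads)
        assignment[job] = s
        heapq.heappush(loads, (load + dur, s))
    return assignment, max(load for load, _ in loads)
```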
This paper focuses on the reconstruction of complete state information in all the individual nodes of a complex network dynamical system, at a supervisory level. Sliding mode observers are designed for this purpose. The proposed network observer is inherently robust, nonlinear and can accommodate time-varying coupling strengths and switching topologies, provided the number of nodes remain fixed. At the supervisory level, decentralised control signals are computed based on the state estimates in order to operate the network of dynamical systems in synchrony. A network of Chua circuits with six nodes is used to demonstrate the novelty of the proposed approach.
['Christopher Edwards', 'Prathyush P. Menon']
State reconstruction in complex networks using sliding mode observers
139,048
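A single node's observer can be sketched as a Luenberger observer plus a discontinuous output-error injection term. The gains below are assumed designed offline (e.g., to enforce a sliding motion on the output error), and the inter-node coupling and switching-topology terms of the networked case are omitted; all names are illustrative.

```python
import numpy as np

def smo_simulate(A, C, L, M, rho, y_meas, x0, dt):
    """Euler simulation of a sliding mode observer
        xhat' = A xhat + L e + M * rho * sign(e),  e = y - C xhat,
    returning the state-estimate trajectory for a measurement sequence."""
    xhat = np.asarray(x0, dtype=float)
    traj = []
    for y in y_meas:
        e = y - C @ xhat
        xhat = xhat + dt * (A @ xhat + L @ e + M @ (rho * np.sign(e)))
        traj.append(xhat.copy())
    return np.asarray(traj)
```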
On the basis of a video corpus of human-robot interactions in a museum guide scenario, and with a combined approach of qualitative case analysis and quantitative evaluation, we examine two research questions: (1) What type of group dynamics result from a robot's unexpected next move, especially with respect to gaze patterns? (2) What should a robot detect when searching for trouble events, and how should it identify patterns of visitor behavior that are related to the robot's actions? Systematically studying opening sequences, we extract structural gaze patterns from two cases considered from an interactional linguistics perspective, while keeping in mind a system's perceptual capabilities.
['Raphaela Gehle', 'Karola Pitsch', 'Timo Dankert', 'Sebastian Wrede']
Trouble-based group dynamics in real-world HRI — Reactions on unexpected next moves of a museum guide robot
315,157
The quantitative estimation of the attenuation coefficient slope (ACS) has the potential to differentiate between healthy and pathological tissues. However, attempts to characterize ACS maps from pulse-echo data using methods such as the spectral log difference (SLD) technique have been limited by the large variability of the estimates. In the present work, ACSs were estimated using a regularized SLD technique. The performance of the proposed approach was experimentally evaluated using two physical phantoms: a homogeneous phantom and a phantom with a cylindrical inclusion. The results obtained with the SLD and regularized SLD techniques were compared to the ACS values obtained with through-transmission techniques. In the homogeneous phantom, the use of regularization allowed reducing the standard deviation by more than 90% while keeping the estimation bias around 2%. For the inhomogeneous phantom, a trade-off between contrast-to-noise ratio (CNR) and estimation bias was observed. However, the use of regularization allowed nearly doubling the CNR from 0.54 to 0.97–1.29 when compared to the standard SLD, while achieving an estimation bias between 10% and 20%. The results suggest that the use of regularization methods can effectively reduce the variability of ACS estimation.
['Andres Coila', 'Julien Rouyer', 'Omar Zenteno', 'Roberto J. Lavarello']
A regularization approach for ultrasonic attenuation imaging
820,559
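The unregularized SLD estimate that the paper regularizes can be written in a few lines: the log-spectral ratio between a proximal and a distal gated window is roughly linear in frequency, and its slope over the depth gap gives the ACS. Two-way propagation and the usual neper-to-decibel conversion are assumed; this omits the paper's regularization across the ACS map.

```python
import numpy as np

def sld_acs_db_cm_mhz(freqs_mhz, spec_proximal, spec_distal, gap_cm):
    """Fit log(S_prox / S_dist) vs frequency; slope / (4 * gap) gives the
    attenuation coefficient slope in Np/cm/MHz (the factor 4 accounts for
    two-way travel over the gap), then convert nepers to decibels."""
    log_ratio = np.log(spec_proximal) - np.log(spec_distal)
    slope, _intercept = np.polyfit(freqs_mhz, log_ratio, 1)
    acs_np = slope / (4.0 * gap_cm)
    return acs_np * (20.0 / np.log(10))   # ~8.686 dB per neper
```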
The acquisition of high-resolution retinal fundus images with a large field of view (FOV) is challenging due to technological, physiological, and economic reasons. This paper proposes a fully automatic framework to reconstruct retinal images of high spatial resolution and increased FOV from multiple low-resolution images captured with non-mydriatic, mobile, video-capable, low-cost cameras. Within the scope of one examination, we scan different regions on the retina by exploiting eye motion induced through patient guidance. Appropriate views for our mosaicing method are selected based on optic disk tracking to trace eye movements. For each view, one super-resolved image is reconstructed by fusing multiple video frames. Finally, all super-resolved views are registered to a common reference using a novel polynomial registration scheme and combined by means of image mosaicing. We evaluated our framework for a mobile and low-cost video fundus camera. In our experiments, we reconstructed retinal images of up to 30° FOV from 10 complementary views of 15° FOV each. An evaluation of the mosaics by human experts, as well as a quantitative comparison to conventional color fundus images, supports the clinical usability of our framework.
['Thomas Köhler', 'Axel Heinrich', 'Andreas K. Maier', 'Joachim Hornegger', 'Ralf-Peter Tornow']
Super-resolved retinal image mosaicing
626,874
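The pairwise registration step of such a pipeline is often prototyped with feature matching. The OpenCV sketch below uses ORB features and a RANSAC homography as a generic stand-in for the paper's polynomial registration scheme; grayscale uint8 inputs and all names are assumptions.

```python
import cv2
import numpy as np

def register_pair(img_ref, img_mov):
    """Warp img_mov into img_ref's frame via ORB matches and a RANSAC
    homography; returns the warped image and the 3x3 transform."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img_ref, None)
    k2, d2 = orb.detectAndCompute(img_mov, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _mask = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(img_mov, H,
                                 (img_ref.shape[1], img_ref.shape[0]))
    return warped, H
```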