diff --git "a/train.csv" "b/train.csv"
new file mode 100644
--- /dev/null
+++ "b/train.csv"
@@ -0,0 +1,12685 @@
+text,label +"abstract +the study of combinatorial optimization problems with a submodular objective has attracted +much attention in recent years. such problems are important in both theory and practice because +their objective functions are very general. obtaining further improvements for many submodular +maximization problems boils down to finding better algorithms for optimizing a relaxation of +them known as the multilinear extension. +in this work we present an algorithm for optimizing the multilinear relaxation whose guarantee improves over the guarantee of the best previous algorithm (which was given by ene +and nguyen (2016)). moreover, our algorithm is based on a new technique which is, arguably, +simpler and more natural for the problem at hand. in a nutshell, previous algorithms for this +problem rely on symmetry properties which are natural only in the absence of a constraint. our +technique avoids the need to resort to such properties, and thus, seems to be a better fit for +constrained problems.",8 +"abstract +numerous variants of self-organizing maps (soms) have been proposed in the literature, including +those which also possess an underlying structure, and in some cases, this structure itself can be defined +by the user although the concepts of growing the som and updating it have been studied, the whole +issue of using a self-organizing adaptive data structure (ads) to further enhance the properties of the +underlying som, has been unexplored. in an earlier work, we impose an arbitrary, user-defined, tree-like +topology onto the codebooks, which consequently enforced a neighborhood phenomenon and the so-called +tree-based bubble of activity (boa). in this paper, we consider how the underlying tree itself can be +rendered dynamic and adaptively transformed.
to do this, we present methods by which a som with +an underlying binary search tree (bst) structure can be adaptively re-structured using conditional +rotations (conrot). these rotations on the nodes of the tree are local, can be done in constant time, +and performed so as to decrease the weighted path length (wpl) of the entire tree. in doing this, we +introduce the pioneering concept referred to as neural promotion, where neurons gain prominence in the +neural network (nn) as their significance increases. we are not aware of any research which deals with +the issue of neural promotion. the advantages of such a scheme is that the user need not be aware of any +of the topological peculiarities of the stochastic data distribution. rather, the algorithm, referred to as +the ttosom with conditional rotations (ttoconrot), converges in such a manner that the neurons +are ultimately placed in the input space so as to represent its stochastic distribution, and additionally, +the neighborhood properties of the neurons suit the best bst that represents the data. these properties +have been confirmed by our experimental results on a variety of data sets. we submit that all of these +concepts are both novel and of a pioneering sort.",9 +"abstract— we consider the problem of steering a system with +unknown, stochastic dynamics to satisfy a rich, temporallylayered task given as a signal temporal logic formula. we +represent the system as a markov decision process in which +the states are built from a partition of the statespace and +the transition probabilities are unknown. we present provably +convergent reinforcement learning algorithms to maximize the +probability of satisfying a given formula and to maximize the +average expected robustness, i.e., a measure of how strongly +the formula is satisfied. 
we demonstrate via a pair of robot +navigation simulation case studies that reinforcement learning +with robustness maximization performs better than probability +maximization in terms of both probability of satisfaction and +expected robustness.",3 +"abstract. we present high performance implementations of the qr and the singular value decomposition of a batch of small matrices hosted on the gpu with applications in the compression +of hierarchical matrices. the one-sided jacobi algorithm is used for its simplicity and inherent +parallelism as a building block for the svd of low rank blocks using randomized methods. we +implement multiple kernels based on the level of the gpu memory hierarchy in which the matrices can reside and show substantial speedups against streamed cusolver svds. the resulting +batched routine is a key component of hierarchical matrix compression, opening up opportunities +to perform h-matrix arithmetic efficiently on gpus.",8 +"abstract +skew bridges are common in highways and railway lines when non perpendicular crossings are encountered. the structural effect of skewness is an additional +torsion on the bridge deck which may have a considerable effect, making its +analysis and design more complex. in this paper, an analytical model following +3d beam theory is firstly derived in order to evaluate the dynamic response +of skew bridges under moving loads. following, a simplified 2d model is also +considered which includes only vertical beam bending. the natural frequencies, +eigenmodes and orthogonality relationships are determined from the boundary +conditions. the dynamic response is determined in time domain by using the +“exact” integration. both models are validated through some numerical examples by comparing with the results obtained by 3d fe models. 
a parametric +study is performed with the simplified model in order to identify parameters +that significantly influence the vertical dynamic response of the skew bridge under traffic loads. the results show that the grade of skewness has an important +influence on the vertical displacement, but hardly on the vertical acceleration of +the bridge. the torsional stiffness really has effect on the vertical displacement +when the skew angle is large. the span length reduces the skewness effect on +the dynamic behavior of the skew bridge. +keywords: skew bridge, bridge modelling, modal analysis, moving load",5 +"abstract +in recent years crowdsourcing has become the method of choice for gathering labeled training data +for learning algorithms. standard approaches to crowdsourcing view the process of acquiring labeled +data separately from the process of learning a classifier from the gathered data. this can give rise to +computational and statistical challenges. for example, in most cases there are no known computationally +efficient learning algorithms that are robust to the high level of noise that exists in crowdsourced data, +and efforts to eliminate noise through voting often require a large number of queries per example. +in this paper, we show how by interleaving the process of labeling and learning, we can attain computational efficiency with much less overhead in the labeling cost. in particular, we consider the realizable +setting where there exists a true target function in f and consider a pool of labelers. when a noticeable +fraction of the labelers are perfect, and the rest behave arbitrarily, we show that any f that can be efficiently learned in the traditional realizable pac model can be learned in a computationally efficient +manner by querying the crowd, despite high amounts of noise in the responses. 
moreover, we show that +this can be done while each labeler only labels a constant number of examples and the number of labels +requested per example, on average, is a constant. when no perfect labelers exist, a related task is to find +a set of the labelers which are good but not perfect. we show that we can identify all good labelers, when +at least the majority of labelers are good.",8 +"abstract +a novel method to identify trampoline skills using a single video camera is proposed herein. conventional computer vision techniques are used for identification, estimation, and tracking of the gymnast’s body +in a video recording of the routine. for each frame, an open source convolutional neural network is used to +estimate the pose of the athlete’s body. body orientation and joint angle estimates are extracted from these +pose estimates. the trajectories of these angle estimates over time are compared with those of labelled reference skills. a nearest neighbour classifier utilising a mean squared error distance metric is used to identify +the skill performed. a dataset containing 714 skill examples with 20 distinct skills performed by adult male +and female gymnasts was recorded and used for evaluation of the system. the system was found to achieve +a skill identification accuracy of 80.7% for the dataset.",1 +"abstract machines involved: +the net m of df a’s ms , ma , . . . with state set q = { q, . . . } (states are drawn as circular nodes); the pilot df a p with m-state set r = { i0 , i1 , . . . } (states are drawn as +rectangular nodes); and the dp da a to be next defined. as said, the dp da stores in +the stack the series of m-states entered during the computation, enriched with additional +information used in the parsing steps. moreover, the m-states are interleaved with terminal or nonterminal grammar symbols. 
the current m-state, i.e., the one on top of stack, +determines the next move: either a shift that scans the next token, or a reduction of a topmost stack segment (also called reduction handle) to a nonterminal identified by a final +candidate included in the current m-state. the absence of shift-reduce conflicts makes +the choice between shift and reduction operations deterministic. similarly, the absence of +reduce-reduce conflicts allows the parser to uniquely identify the final state of a machine. +however, this leaves open the problem to determine the stack segment to be reduced. for +that two designs will be presented: the first uses a finite pushdown alphabet; the second +uses unbounded integer pointers and, strictly speaking, no longer qualifies as a pushdown +automaton. +first, we specify the pushdown stack alphabet. since for a given net m there are finitely +many different candidates, the number of m-states is bounded and the number of candidates +in any m-state is also bounded by cmax = |q| × ( |σ| + 1 ). the dp da stack elements",6 +"abstract. we present an i/o-efficient algorithm for computing similarity joins based on locality-sensitive hashing (lsh). in contrast to +the filtering methods commonly suggested our method has provable subquadratic dependency on the data size. further, in contrast to straightforward implementations of known lsh-based algorithms on external memory, our approach is able to take significant advantage of the available +internal memory: whereas the time complexity of classical algorithms +includes a factor of n ρ , where ρ is a parameter of the lsh used, the i/o +complexity of our algorithm merely includes a factor (n/m )ρ , where n +is the data size and m is the size of internal memory. our algorithm is +randomized and outputs the correct result with high probability. 
it is a +simple, recursive, cache-oblivious procedure, and we believe that it will +be useful also in other computational settings such as parallel computation.",8 +"abstract. let a be a commutative ring, and let a be a weakly proregular +ideal in a. (if a is noetherian then any ideal in it is weakly proregular.) +suppose m is a compact generator of the category of cohomologically a-torsion +complexes. we prove that the derived double centralizer of m is isomorphic +to the a-adic completion of a. the proof relies on the mgm equivalence from +[psy] and on derived morita equivalence. our result extends earlier work of +dwyer-greenlees-iyengar [dgi] and efimov [ef].",0 +"abstract +though traditional algorithms could be embedded +into neural architectures with the proposed principle of (xiao, 2017), the variables that only occur +in the condition of branch could not be updated +as a special case. to tackle this issue, we multiply +the conditioned branches with dirac symbol (i.e. +1x>0 ), then approximate dirac symbol with the +continuous functions (e.g. 1 − e−α|x| ). in this +way, the gradients of condition-specific variables +could be worked out in the back-propagation process, approximately, making a fully functioned +neural graph. within our novel principle, we propose the neural decision tree (ndt), which takes +simplified neural networks as decision function +in each branch and employs complex neural networks to generate the output in each leaf. extensive experiments verify our theoretical analysis +and demonstrate the effectiveness of our model.",9 +"abstract +adams’ extension of parsing expression grammars enables specifying indentation sensitivity using +two non-standard grammar constructs — indentation by a binary relation and alignment. this +paper proposes a step-by-step transformation of well-formed adams’ grammars for elimination +of the alignment construct from the grammar. 
the idea that alignment could be avoided was +suggested by adams but no process for achieving this aim has been described before. +1998 acm subject classification d.3.1 formal definitions and theory; d.3.4 processors; f.4.2 +grammars and other rewriting systems +keywords and phrases parsing expression grammars, indentation, grammar transformation",6 +abstraction.,6 +"abstract +the coalitional manipulation problem has been studied extensively in the literature for many +voting rules. however, most studies have focused on the complete information setting, wherein +the manipulators know the votes of the non-manipulators. while this assumption is reasonable for purposes of showing intractability, it is unrealistic for algorithmic considerations. in +most real-world scenarios, it is impractical to assume that the manipulators to have accurate +knowledge of all the other votes. in this work, we investigate manipulation with incomplete +information. in our framework, the manipulators know a partial order for each voter that is +consistent with the true preference of that voter. in this setting, we formulate three natural +computational notions of manipulation, namely weak, opportunistic, and strong manipulation. +we say that an extension of a partial order is viable if there exists a manipulative vote for that extension. we propose the following notions of manipulation when manipulators have incomplete +information about the votes of other voters. +1. w eak m anipulation: the manipulators seek to vote in a way that makes their preferred +candidate win in at least one extension of the partial votes of the non-manipulators. +2. o pportunistic m anipulation: the manipulators seek to vote in a way that makes +their preferred candidate win in every viable extension of the partial votes of the nonmanipulators. +3. 
s trong m anipulation: the manipulators seek to vote in a way that makes their preferred +candidate win in every extension of the partial votes of the non-manipulators. +we consider several scenarios for which the traditional manipulation problems are easy (for +instance, borda with a single manipulator). for many of them, the corresponding manipulative +questions that we propose turn out to be computationally intractable. our hardness results often +hold even when very little information is missing, or in other words, even when the instances are +very close to the complete information setting. our results show that the impact of paucity of +information on the computational complexity of manipulation crucially depends on the notion +of manipulation under consideration. our overall conclusion is that computational hardness +continues to be a valid obstruction to manipulation, in the context of a more realistic model.",8 +"abstract +we propose a simple approach which, given +distributed computing resources, can nearly +achieve the accuracy of k-nn prediction, while +matching (or improving) the faster prediction +time of 1-nn. the approach consists of aggregating denoised 1-nn predictors over a small +number of distributed subsamples. we show, +both theoretically and experimentally, that +small subsample sizes suffice to attain similar +performance as k-nn, without sacrificing the +computational efficiency of 1-nn.",10 +"abstract. we consider graded artinian complete intersection algebras a = +c[x0 , . . . , xm ]/i with i generated by homogeneous forms of degree d ≥ 2. we +show that the general multiplication by a linear form µl : ad−1 → ad is +injective. we prove that the weak lefschetz property for holds for any c.i. +algebra a as above with d = 2 and m ≤ 4, previously known for m ≤ 3.",0 +"abstract +recurrent neural networks (rnns), particularly long +short-term memory (lstm), have gained much attention in +automatic speech recognition (asr). 
although some successful stories have been reported, training rnns remains +highly challenging, especially with limited training data. recent research found that a well-trained model can be used as +a teacher to train other child models, by using the predictions +generated by the teacher model as supervision. this knowledge transfer learning has been employed to train simple +neural nets with a complex one, so that the final performance +can reach a level that is infeasible to obtain by regular training. in this paper, we employ the knowledge transfer learning +approach to train rnns (precisely lstm) using a deep neural network (dnn) model as the teacher. this is different +from most of the existing research on knowledge transfer +learning, since the teacher (dnn) is assumed to be weaker +than the child (rnn); however, our experiments on an asr +task showed that it works fairly well: without applying any +tricks on the learning scheme, this approach can train rnns +successfully even with limited training data. +index terms— recurrent neural network, long shortterm memory, knowledge transfer learning, automatic speech +recognition +1. introduction +deep learning has gained significant success in a wide range +of applications, for example, automatic speech recognition +(asr) [1]. a powerful deep learning model that has been +reported effective in asr is the recurrent neural network +(rnn), e.g., [2, 3, 4]. an obvious advantage of rnns compared to conventional deep neural networks (dnns) is that +rnns can model long-term temporal properties and thus are +suitable for modeling speech signals. +a simple training method for rnns is the backpropagation through time algorithm [5]. this first-order approach, +this work was supported by the national natural science foundation of +china under grant no. 61371136 and the mestdc phd foundation project +no. 20130002120011. this paper was also supported by huilan ltd. 
and +sinovoice.",9 +"abstract— this paper considers the problem of implementing +a previously proposed distributed direct coupling quantum +observer for a closed linear quantum system. by modifying the +form of the previously proposed observer, the paper proposes +a possible experimental implementation of the observer plant +system using a non-degenerate parametric amplifier and a +chain of optical cavities which are coupled together via optical +interconnections. it is shown that the distributed observer +converges to a consensus in a time averaged sense in which an +output of each element of the observer estimates the specified +output of the quantum plant.",3 +"abstract. using valuation rings and valued fields as examples, we +discuss in which ways the notions of “topological ifs attractor” and +“fractal space” can be generalized to cover more general settings.",0 +"abstract +we consider the problem of reconstructing an unknown bounded function u defined on a domain x ⊂ +rd from noiseless or noisy samples of u at n points (xi )i=1,...,n . we measure the reconstruction error in a +norm l2 (x, dρ) for some given probability measure dρ. given a linear space vm with dim(vm ) = m ≤ n, +we study in general terms the weighted least-squares approximations from the spaces vm based on +independent random samples. it is well known that least-squares approximations can be inaccurate and +unstable when m is too close to n, even in the noiseless case. recent results from [4, 5] have shown the +interest of using weighted least squares for reducing the number n of samples that is needed to achieve an +accuracy comparable to that of best approximation in vm , compared to standard least squares as studied +in [3]. the contribution of the present paper is twofold. from the theoretical perspective, we establish +results in expectation and in probability for weighted least squares in general approximation spaces vm . 
+these results show that for an optimal choice of sampling measure dµ and weight w, which depends on the +space vm and on the measure dρ, stability and optimal accuracy are achieved under the mild condition +that n scales linearly with m up to an additional logarithmic factor. in contrast to [3], the present +analysis covers cases where the function u and its approximants from vm are unbounded, which might +occur for instance in the relevant case where x = rd and dρ is the gaussian measure. from the numerical +perspective, we propose a sampling method which allows one to generate independent and identically +distributed samples from the optimal measure dµ. this method becomes of interest in the multivariate +setting where dµ is generally not of tensor product type. we illustrate this for particular examples of +approximation spaces vm of polynomial type, where the domain x is allowed to be unbounded and high +or even infinite dimensional, motivated by certain applications to parametric and stochastic pdes.",10 +"abstract +in this paper, we explore different ways to extend a recurrent neural network +(rnn) to a deep rnn. we start by arguing that the concept of depth in an rnn +is not as clear as it is in feedforward neural networks. by carefully analyzing +and understanding the architecture of an rnn, however, we find three points of +an rnn which may be made deeper; (1) input-to-hidden function, (2) hidden-tohidden transition and (3) hidden-to-output function. based on this observation, we +propose two novel architectures of a deep rnn which are orthogonal to an earlier +attempt of stacking multiple recurrent layers to build a deep rnn (schmidhuber, 1992; el hihi and bengio, 1996). we provide an alternative interpretation +of these deep rnns using a novel framework based on neural operators. the +proposed deep rnns are empirically evaluated on the tasks of polyphonic music +prediction and language modeling. 
the experimental result supports our claim +that the proposed deep rnns benefit from the depth and outperform the conventional, shallow rnns.",9 +"abstract—today’s hpc applications are producing extremely large amounts of data, such that data storage and +analysis are becoming more challenging for scientific research. +in this work, we design a new error-controlled lossy compression algorithm for large-scale scientific data. our key +contribution is significantly improving the prediction hitting +rate (or prediction accuracy) for each data point based on +its nearby data values along multiple dimensions. we derive +a series of multilayer prediction formulas and their unified +formula in the context of data compression. one serious +challenge is that the data prediction has to be performed based +on the preceding decompressed values during the compression +in order to guarantee the error bounds, which may degrade +the prediction accuracy in turn. we explore the best layer +for the prediction by considering the impact of compression +errors on the prediction accuracy. moreover, we propose an +adaptive error-controlled quantization encoder, which can further improve the prediction hitting rate considerably. the data +size can be reduced significantly after performing the variablelength encoding because of the uneven distribution produced by +our quantization encoder. we evaluate the new compressor on +production scientific data sets and compare it with many other +state-of-the-art compressors: gzip, fpzip, zfp, sz-1.1, and +isabela. experiments show that our compressor is the best +in class, especially with regard to compression factors (or bitrates) and compression errors (including rmse, nrmse, and +psnr). 
our solution is better than the second-best solution +by more than a 2x increase in the compression factor and +3.8x reduction in the normalized root mean squared error on +average, with reasonable error bounds and user-desired bitrates.",7 +"abstract +in this paper we consider the problem of detecting a change in the parameters of an autoregressive process, where the moments of the innovation process +do not necessarily exist. an empirical likelihood ratio test for the existence +of a change point is proposed and its asymptotic properties are studied. in +contrast to other work on change point tests using empirical likelihood, we do +not assume knowledge of the location of the change point. in particular, we +prove that the maximizer of the empirical likelihood is a consistent estimator +for the parameters of the autoregressive model in the case of no change point +and derive the limiting distribution of the corresponding test statistic under +the null hypothesis. we also establish consistency of the new test. a nice +feature of the method consists in the fact that the resulting test is asymptotically distribution free and does not require an estimate of the long run +variance. the asymptotic properties of the test are investigated by means of",10 +abstract,6 +"abstract +pandemic influenza has the epidemic potential to kill millions of people. while various preventive measures exist (i.a., vaccination and school +closures), deciding on strategies that lead to their most effective and efficient use, remains challenging. to this end, individual-based epidemiological models are essential to assist decision makers in determining the +best strategy to curve epidemic spread. however, individual-based models are computationally intensive and therefore it is pivotal to identify +the optimal strategy using a minimal amount of model evaluations. additionally, as epidemiological modeling experiments need to be planned, +a computational budget needs to be specified a priori. 
consequently, we +present a new sampling method to optimize the evaluation of preventive +strategies using fixed budget best-arm identification algorithms. we use +epidemiological modeling theory to derive knowledge about the reward +distribution which we exploit using bayesian best-arm identification algorithms (i.e., top-two thompson sampling and bayesgap). we evaluate +these algorithms in a realistic experimental setting and demonstrate that +it is possible to identify the optimal strategy using only a limited number +of model evaluations, i.e., 2-to-3 times faster compared to the uniform +sampling method, the predominant technique used for epidemiological +decision making in the literature. finally, we contribute and evaluate a +statistic for top-two thompson sampling to inform the decision makers +about the confidence of an arm recommendation.",2 +"abstract +this paper deals with feature selection procedures for spatial point processes intensity estimation. we consider regularized versions of estimating +equations based on campbell theorem derived from two classical functions: +poisson likelihood and logistic regression likelihood. we provide general conditions on the spatial point processes and on penalty functions which ensure +consistency, sparsity and asymptotic normality. we discuss the numerical +implementation and assess finite sample properties in a simulation study. finally, an application to tropical forestry datasets illustrates the use of the +proposed methods.",10 +"abstract—wireless networked control systems (wncs) are +composed of spatially distributed sensors, actuators, and controllers communicating through wireless networks instead of +conventional point-to-point wired connections. 
due to their main +benefits in the reduction of deployment and maintenance costs, +large flexibility and possible enhancement of safety, wncs are +becoming a fundamental infrastructure technology for critical control systems in automotive electrical systems, avionics +control systems, building management systems, and industrial +automation systems. the main challenge in wncs is to jointly +design the communication and control systems considering their +tight interaction to improve the control performance and the +network lifetime. in this survey, we make an exhaustive review +of the literature on wireless network design and optimization for +wncs. first, we discuss what we call the critical interactive +variables including sampling period, message delay, message +dropout, and network energy consumption. the mutual effects of +these communication and control variables motivate their joint +tuning. we discuss the effect of controllable wireless network +parameters at all layers of the communication protocols on the +probability distribution of these interactive variables. we also +review the current wireless network standardization for wncs +and their corresponding methodology for adapting the network +parameters. moreover, we discuss the analysis and design of +control systems taking into account the effect of the interactive +variables on the control system performance. finally, we present +the state-of-the-art wireless network design and optimization for +wncs, while highlighting the tradeoff between the achievable +performance and complexity of various approaches. we conclude +the survey by highlighting major research issues and identifying +future research directions. +index terms—wireless networked control systems, wireless +sensor and actuator networks, joint design, delay, reliability, +sampling rate, network lifetime, optimization.",3 +"abstract. 
in this article, the projectivity of a finitely generated +flat module of a commutative ring is studied through its exterior +powers and invariant factors. consequently, the related results of +endo, vasconcelos, wiegand, cox-rush and puninski-rothmaler +on the projectivity of f.g. flat modules are generalized.",0 +abstract,2 +"abstract—this paper studies optimal tracking performance +issues for multi-input-multi-output linear time-invariant systems +under networked control with limited bandwidth and additive +colored white gaussian noise channel. the tracking performance +is measured by control input energy and the energy of the error +signal between the output of the system and the reference signal +with respect to a brownian motion random process. this paper +focuses on two kinds of network parameters, the basic network +parameter-bandwidth and the additive colored white gaussian +noise, and studies the tracking performance limitation problem. +the best attainable tracking performance is obtained, and the +impact of limited bandwidth and additive colored white gaussian +noise of the communication channel on the attainable tracking +performance is revealed. it is shown that the optimal tracking +performance depends on nonminimum phase zeros, gain at all +frequencies and their directions unitary vector of the given plant, +as well as the limited bandwidth and additive colored white +gaussian noise of the communication channel. the simulation +results are finally given to illustrate the theoretical results. +index terms—networked control systems, bandwidth, additive +colored white gaussian noise, performance limitation.",3 +"abstract +a group of order pn (p prime) has an indecomposable polynomial +invariant of degree at least pn−1 if and only if the group has a cyclic +subgroup of index at most p or it is isomorphic to the elementary +abelian group of order 8 or the heisenberg group of order 27. 
+keywords: polynomial invariants, degree bounds, zero-sum sequences",4 +"abstract. secret sharing is a cryptographic discipline in which the goal is +to distribute information about a secret over a set of participants in such a +way that only specific authorized combinations of participants together can +reconstruct the secret. thus, secret sharing schemes are systems of variables +in which it is very clearly specified which subsets have information about +the secret. as such, they provide perfect model systems for information +decompositions. however, following this intuition too far leads to an information +decomposition with negative partial information terms, which are difficult to +interpret. one possible explanation is that the partial information lattice +proposed by williams and beer is incomplete and has to be extended to +incorporate terms corresponding to higher order redundancy. these results +put bounds on information decompositions that follow the partial information +framework, and they hint at where the partial information lattice needs to be +improved.",7 +"abstract +permutation polynomials over finite fields are an interesting subject due to their important +applications in the areas of mathematics and engineering. in this paper we investigate the trinomial +f (x) = x(p−1)q+1 + xpq − xq+(p−1) over the finite field fq2 , where p is an odd prime and q = +pk with k being a positive integer. it is shown that when p = 3 or 5, f (x) is a permutation +trinomial of fq2 if and only if k is even. this property is also true for more general class of +polynomials g(x) = x(q+1)l+(p−1)q+1 + x(q+1)l+pq − x(q+1)l+q+(p−1) , where l is a nonnegative integer +and gcd(2l + p, q − 1) = 1. moreover, we also show that for p = 5 the permutation trinomials f (x) +proposed here are new in the sense that they are not multiplicative equivalent to previously known +ones of similar form. 
index terms finite fields, permutation polynomials, trinomials, niho exponents, multiplicative inequivalent.
ams 94b15, 11t71",7
"abstracts/definitions) to improve the answering ability of a model. marino et al. (2017) explicitly incorporate knowledge graphs into an image classification model. xu et al. (2016) created a recall mechanism in a standard lstm cell that retrieves pieces of external knowledge encoded by a single representation for a conversation model. concurrently, dhingra et al. (2017) exploit linguistic knowledge using mage-grus, an adaptation of grus to handle graphs; however, external knowledge has to be present in the form of triples. the main difference to our approach is that we incorporate external knowledge in free-text form on the word level prior to processing the task at hand, which constitutes a more flexible setup. ahn et al. (2016) exploit knowledge base facts about mentioned entities for neural language models. bahdanau et al. (2017) and long et al. (2017) create word embeddings on-the-fly by reading word definitions prior to processing the task at hand. pilehvar et al. (2017) seamlessly incorporate information about word senses into their representations before solving the downstream nlu task, which is similar. we go one step further by seamlessly integrating all kinds of fine-grained assertions about concepts that might be relevant for the task at hand.
another important aspect of our approach is the notion of dynamically updating word representations. tracking and updating concepts, entities or sentences with dynamic memories is a very active research direction (kumar et al., 2016; henaff et al., 2017; ji et al., 2017; kobayashi et al., 2017). however, those works typically focus on particular tasks whereas our approach is task-agnostic and, most importantly, allows for the integration of external background knowledge.
other related work includes storing temporary information in weight matrices instead of explicit neural activations (such as word representations) as a biologically more plausible alternative.",2
"abstraction: at the highest level, the software architecture models the
m. bujorianu and m. fisher (eds.): workshop on formal methods for aerospace (fma) eptcs 20, 2010, pp. 80–87, doi:10.4204/eptcs.20.9",6
"abstract. we prove that abelian subgroups of the outer automorphism group of a free group are quasi-isometrically embedded. our proof uses recent developments in the theory of train track maps by feighn-handel. as an application, we prove the rank conjecture for out(fn).",4
"abstract
catroid is a free and open source visual programming language, programming environment, image manipulation program, and website. catroid allows casual and first-time users starting from age eight to develop their own animations and games solely using their android phones or tablets. catroid also allows users to wirelessly control external hardware such as lego mindstorms robots via bluetooth, bluetooth arduino boards, as well as parrot’s popular and inexpensive ar.drone quadcopters via wifi.",6
"abstract
the map-reduce computing framework rose to prominence with datasets of such size that dozens of machines on a single cluster were needed for individual jobs. as datasets approach the exabyte scale, a single job may need distributed processing not only on multiple machines, but on multiple clusters. we consider a scheduling problem to minimize weighted average completion time of n jobs on m distributed clusters of parallel machines. in keeping with the scale of the problems motivating this work, we assume that (1) each job is divided into m “subjobs” and (2) distinct subjobs of a given job may be processed concurrently.
when each cluster is a single machine, this is the np-hard concurrent open shop problem.
a +clear limitation of such a model is that a serial processing assumption sidesteps the issue of how +different tasks of a given subjob might be processed in parallel. our algorithms explicitly model +clusters as pools of resources and effectively overcome this issue. +under a variety of parameter settings, we develop two constant factor approximation algorithms for this problem. the first algorithm uses an lp relaxation tailored to this problem +from prior work. this lp-based algorithm provides strong performance guarantees. our second +algorithm exploits a surprisingly simple mapping to the special case of one machine per cluster. +this mapping-based algorithm is combinatorial and extremely fast. these are the first constant +factor approximations for this problem. +remark - a shorter version of this paper (one that omitted several proofs) appeared in the +proceedings of the 2016 european symposium on algorithms. +1998 acm subject classification f.2.2 nonnumerical algorithms and problems +keywords and phrases approximation algorithms, distributed computing, machine scheduling, +lp relaxations, primal-dual algorithms +digital object identifier 10.4230/lipics.esa.2016.234",8 +"abstract +the propagation of sound in a shallow water environment is characterized by boundary reflections from the sea surface and sea floor. +these reflections result in multiple (indirect) sound propagation +paths, which can degrade the performance of passive sound source +localization methods. this paper proposes the use of convolutional +neural networks (cnns) for the localization of sources of broadband acoustic radiated noise (such as motor vessels) in shallow +water multipath environments. it is shown that cnns operating +on cepstrogram and generalized cross-correlogram inputs are able +to more reliably estimate the instantaneous range and bearing of +transiting motor vessels when the source localization performance +of conventional passive ranging methods is degraded. 
the ensuing +improvement in source localization performance is demonstrated +using real data collected during an at-sea experiment. +index terms— source localization, doa estimation, convolutional neural networks, passive sonar, reverberation +1. introduction +sound source localization plays an important role in array signal processing with wide applications in communication, sonar and robotics +systems [1]. it is a focal topic in the scientific literature on acoustic array signal processing with a continuing challenge being acoustic source localization in the presence of interfering multipath arrivals [2, 3, 4]. in practice, conventional passive narrowband sonar +array methods involve frequency-domain beamforming of the outputs of hydrophone elements in a receiving array to detect weak signals, resolve closely-spaced sources, and estimate the direction of +a sound source. typically, 10-100 sensors form a linear array with +a uniform interelement spacing of half a wavelength at the array’s +design frequency. however, this narrowband approach has application over a limited band of frequencies. the upper limit is set by +the design frequency, above which grating lobes form due to spatial +aliasing, leading to ambiguous source directions. the lower limit is +set one octave below the design frequency because at lower frequencies the directivity of the array is much reduced as the beamwidths +broaden. +an alternative approach to sound source localization is to measure the time difference of arrival (tdoa) of the signal at an array of spatially distributed receivers [5, 6, 7, 8], allowing the instantaneous position of the source to be estimated. the accuracy of +the source position estimates is found to be sensitive to any uncertainty in the sensor positions [9]. 
furthermore, reverberation has an adverse effect on time delay estimation, which negatively impacts
∗ work",7
"abstract
we consider the ransac algorithm in the context of subspace recovery and subspace clustering. we derive some theory and perform some numerical experiments. we also draw some correspondences with the methods of hardt and moitra (2013) and chen and lerman (2009b).",10
"abstraction of a physical sensing device located at position p and running program p. module m is the collection of methods that the sensor makes available for internal and for external usage. typically this collection of methods may be interpreted as the library of functions of the tiny operating system installed in the sensor. sensors may only broadcast values to their neighborhood sensors. radius rt defines the transmitting power of a sensor and specifies the border of communication: a circle centered at position p (the position of the sensor) with radius rt. likewise, radius rs defines the sensing capability of the sensor, meaning that a sensor may only read values inside the circle centered at position p with radius rs.
values ⟨~v⟩p define the field of measures that may be sensed. a value consists of a tuple ~v denoting the strength of the measure at a given position p of the plane. values are managed by the environment; in csn there are no primitives for manipulating values, besides reading (sensing) values. we assume that the environment inserts these values in the network and updates its contents. networks are combined using the parallel composition operator |.",2
"abstract—filtering and smoothing algorithms for linear discrete-time state-space models with skew-t-distributed measurement noise are presented. the presented algorithms use a variational bayes based posterior approximation with coupled location and skewness variables to reduce the error caused by the variational approximation.
although the variational update +is done suboptimally, our simulations show that the proposed +method gives a more accurate approximation of the posterior +covariance matrix than an earlier proposed variational algorithm. +consequently, the novel filter and smoother outperform the +earlier proposed robust filter and smoother and other existing +low-complexity alternatives in accuracy and speed. we present +both simulations and tests based on real-world navigation data, +in particular gps data in an urban area, to demonstrate the +performance of the novel methods. moreover, the extension of the +proposed algorithms to cover the case where the distribution of +the measurement noise is multivariate skew-t is outlined. finally, +the paper presents a study of theoretical performance bounds for +the proposed algorithms.",3 +"abstract +in this paper, a discrete-time multi-agent system is presented which is formulated in terms of the +delta operator. the proposed multi-agent system can unify discrete-time and continuous-time multi-agent +systems. in a multi-agent network, in practice, the communication among agents is acted upon by various +factors. the communication network among faulty agents may cause link failures, which is modeled +by randomly switching graphs. first, we show that the delta representation of discrete-time multi-agent +system reaches consensus in mean (in probability and almost surely) if the expected graph is strongly +connected. the results induce that the continuous-time multi-agent system with random networks can +also reach consensus in the same sense. second, the influence of faulty agents on consensus value is +quantified under original network. by using matrix perturbation theory, the error bound is also presented +in this paper. finally, a simulation example is provided to demonstrate the effectiveness of our theoretical +results. 
index terms
consensus, multi-agent systems, delta operator, link failures, error bound.",3
"abstract
we give sufficient identifiability conditions for estimating mixing proportions in two-component mixtures of skew normal distributions with one known component. we consider the univariate case as well as two multivariate extensions: a multivariate skew normal distribution (msn) by azzalini and dalla valle (1996) and the canonical fundamental skew normal distribution (cfusn) by arellano-valle and genton (2005). the characteristic function of the cfusn distribution is additionally derived.",10
abstract: the shear strength and stick-slip behavior of a rough rock joint are analyzed using the,5
"abstract
for σ an orientable surface of finite topological type having genus at least 3 (possibly closed or possibly with any number of punctures or boundary components), we show that the mapping class group mod(σ) has no faithful linear representation in any dimension over any field of positive characteristic.",4
"abstract. in this work we propose a heuristic algorithm for the layout optimization of disks installed in a rotating circular container. this is an unequal circle packing problem with additional balance constraints. it has been proven to be an np-hard problem, which justifies heuristic methods for its resolution in larger instances. the main feature of our heuristic is the selection of the next circle to be placed inside the container according to the position of the system’s center of mass. our approach has been tested on a series of instances with up to 55 circles and compared with the literature. computational results show good performance in terms of solution quality and computational time for the proposed algorithm.
keywords: packing problem, layout optimization problem, nonidentical circles, heuristic algorithm",5
"abstract
a rack on [n] can be thought of as a set of maps (fx)x∈[n], where each fx is a permutation of [n] such that f_{(x)fy} = fy^{−1} fx fy for all x and y. in 2013, blackburn showed that the number of isomorphism classes of racks on [n] is at least 2^{(1/4−o(1))n^2} and at most 2^{(c+o(1))n^2}, where c ≈ 1.557; in this paper we improve the upper bound to 2^{(1/4+o(1))n^2}, matching the lower bound. the proof involves considering racks as loopless, edge-coloured directed multigraphs on [n], where we have an edge of colour y between x and z if and only if (x)fy = z, and applying various combinatorial tools.",4
"abstract
this paper discusses the minimum distance estimation method in the linear regression model with dependent errors which are strongly mixing. the regression parameters are estimated through the minimum distance estimation method, and asymptotic distributional properties of the estimators are discussed. a simulation study compares the performance of the minimum distance estimator with that of another well-celebrated estimator, and shows the superiority of the minimum distance estimator. the koulmde r package, which was used for the simulation study, is available online; see section 4 for the details.",10
"abstract. programs that process data that reside in files are widely used in varied domains, such as banking, healthcare, and web-traffic analysis. precise static analysis of these programs in the context of software verification and transformation tasks is a challenging problem. our key insight is that static analysis of file-processing programs can be made more useful if knowledge of the input file formats of these programs is made available to the analysis.
we propose a generic framework that +is able to perform any given underlying abstract interpretation on the +program, while restricting the attention of the analysis to program paths +that are potentially feasible when the program’s input conforms to the +given file format specification. we describe an implementation of our approach, and present empirical results using real and realistic programs +that show how our approach enables novel verification and transformation tasks, and also improves the precision of standard analysis problems.",6 +"abstract +integer-forcing source coding has been proposed as a low-complexity method for compression of distributed correlated +gaussian sources. in this scheme, each encoder quantizes its observation using the same fine lattice and reduces the result +modulo a coarse lattice. rather than directly recovering the individual quantized signals, the decoder first recovers a full-rank +set of judiciously chosen integer linear combinations of the quantized signals, and then inverts it. it has been observed that the +method works very well for “most” but not all source covariance matrices. the present work quantifies the measure of bad +covariance matrices by studying the probability that integer-forcing source coding fails as a function of the allocated rate, where +the probability is with respect to a random orthonormal transformation that is applied to the sources prior to quantization. for +the important case where the signals to be compressed correspond to the antenna inputs of relays in an i.i.d. rayleigh fading +environment, this orthonormal transformation can be viewed as being performed by nature. hence, the results provide performance +guarantees for distributed source coding via integer forcing in this scenario.",7 +"abstract +the classical-input quantum-output (cq) wiretap channel is a communication model involving a classical sender x, a legitimate quantum receiver b, and a quantum eavesdropper e. 
the goal of a private communication protocol that uses such a channel is for the sender x to transmit a message in such a way that the legitimate receiver b can decode it reliably, while the eavesdropper e learns essentially nothing about which message was transmitted. the ε-one-shot private capacity of a cq wiretap channel is equal to the maximum number of bits that can be transmitted over the channel, such that the privacy error is no larger than ε ∈ (0, 1). the present paper provides a lower bound on the ε-one-shot private classical capacity, by exploiting the recently developed techniques of anshu, devabathini, jain, and warsi, called position-based coding and convex splitting. the lower bound is equal to a difference of the hypothesis testing mutual information between x and b and the “alternate” smooth max-information between x and e. the one-shot lower bound then leads to a non-trivial lower bound on the second-order coding rate for private classical communication over a memoryless cq wiretap channel.",7
"abstract—the world is connected through the internet. with the abundance of internet users connected to the web and the popularity of cloud computing research, the need for artificial intelligence (ai) is growing. in this research, genetic algorithm (ga), an ai optimization method based on natural selection and genetic evolution, is utilized. there are many applications of ga such as web mining, load balancing, routing, and scheduling or web service selection. hence, it is a challenging task to discover whether the server-side, web-based language technology affects the performance of ga. the travelling salesperson problem (tsp), an np-hard problem, is provided as the problem domain to be solved by ga. while many scientists prefer python for ga implementation, other popular high-level interpreted programming languages, php (php hypertext preprocessor) and ruby, were benchmarked.
lines of code, file sizes, and the performance of the ga implementations at runtime were found to vary among these programming languages. based on the results, the use of ruby for ga implementation is recommended.
keywords—tsp; genetic algorithm; web-programming language",6
"abstract—smartphone applications designed to track human motion in combination with wearable sensors, e.g., during physical exercising, have attracted considerable attention recently. commonly, they provide quantitative services, such as personalized training instructions or the counting of distances. but qualitative monitoring and assessment is still missing, e.g., to detect malpositions, to prevent injuries, or to optimize training success.
we address this issue by presenting a concept for qualitative as well as generic assessment of recurrent human motion by processing multi-dimensional, continuous time series tracked with motion sensors. therefore, our segmentation procedure extracts individual events of specific length, and we propose expressive features to accomplish a qualitative motion assessment by supervised classification. we verified our approach within a comprehensive study encompassing 27 athletes undertaking different body weight exercises. we are able to recognize six different exercise types with a success rate of 100% and to assess them qualitatively with an average success rate of 99.3%.
keywords—motion assessment; activity recognition; physical exercises; segmentation",1
"abstractions for high-performance remote data access, mechanisms for scalable data replication, cataloging with rich semantic and syntactic information, data discovery, distributed monitoring, and web-based portals for using the system.
keywords—climate modeling, data management, earth system grid (esg), grid computing.",5
"abstract—modern dense flash memory devices operate at very low error rates, which require powerful error-correcting coding (ecc) techniques.
an emerging class of graph-based ecc +techniques that has broad applications is the class of spatiallycoupled (sc) codes, where a block code is partitioned into +components that are then rewired multiple times to construct +an sc code. here, our focus is on sc codes with the underlying +circulant-based structure. in this paper, we present a three-stage +approach for the design of high performance non-binary sc (nbsc) codes optimized for practical flash channels; we aim at +minimizing the number of detrimental general absorbing sets of +type two (gasts) in the graph of the designed nb-sc code. in +the first stage, we deploy a novel partitioning mechanism, called +the optimal overlap partitioning, which acts on the protograph of +the sc code to produce optimal partitioning corresponding to +the smallest number of detrimental objects. in the second stage, +we apply a new circulant power optimizer to further reduce the +number of detrimental gasts. in the third stage, we use the +weight consistency matrix framework to manipulate edge weights +to eliminate as many as possible of the gasts that remain in +the nb-sc code after the first two stages (that operate on the +unlabeled graph of the code). simulation results reveal that nbsc codes designed using our approach outperform state-of-theart nb-sc codes when used over flash channels.",7 +"abstract. let g be a finite group and, for a prime p, let s be a sylow p-subgroup of g. a +character χ of g is called sylp -regular if the restriction of χ to s is the character of the regular +representation of s. if, in addition, χ vanishes at all elements of order divisible by p, χ is said +to be steinberg-like. for every finite simple group g we determine all primes p for which g +admits a steinberg-like character, except for alternating groups in characteristic 2. 
moreover, we determine all primes for which g has a projective fg-module of dimension |s|, where f is an algebraically closed field of characteristic p.",4
"abstract
we present bounded dynamic (but observer-free) output feedback laws that achieve global stabilization of equilibrium profiles of the partial differential equation (pde) model of a simplified, age-structured chemostat model. the chemostat pde state is positive-valued, which means that our global stabilization is established in the positive orthant of a particular function space—a rather non-standard situation, for which we develop non-standard tools. our feedback laws do not employ any of the (distributed) parametric knowledge of the model. moreover, we provide a family of highly unconventional control lyapunov functionals (clfs) for the age-structured chemostat pde model. two kinds of feedback stabilizers are provided: stabilizers with continuously adjusted input and sampled-data stabilizers. the results are based on the transformation of the first-order hyperbolic partial differential equation to an ordinary differential equation (one-dimensional) and an integral delay equation (infinite-dimensional). novel stability results for integral delay equations are also provided; the results are of independent interest and allow the explicit construction of the clf for the age-structured chemostat model.",3
"abstract
this paper studies a pursuit-evasion problem involving a single pursuer and a single evader, where we are interested in developing a pursuit strategy that doesn’t require continuous, or even periodic, information about the position of the evader. we propose a self-triggered control strategy that allows the pursuer to sample the evader’s position autonomously, while satisfying a desired performance metric of evader capture.
the work in this paper builds on the previously proposed self-triggered pursuit strategy +which guarantees capture of the evader in finite time with a finite number of evader samples. however, this algorithm relied +on the unrealistic assumption that the evader’s exact position was available to the pursuer. instead, we extend our previous +framework to develop an algorithm which allows for uncertainties in sampling the information about the evader, and derive +tolerable upper-bounds on the error such that the pursuer can guarantee capture of the evader. in addition, we outline the +advantages of retaining the evader’s history in improving the current estimate of the true location of the evader that can be +used to capture the evader with even less samples. our approach is in sharp contrast to the existing works in literature and +our results ensure capture without sacrificing any performance in terms of guaranteed time-to-capture, as compared to classic +algorithms that assume continuous availability of information. +key words: pursuit-evasion; self-triggered control; sampled-data control; set-valued analysis",3 +"abstract. in this article we explore an algorithm for diffeomorphic +random sampling of nonuniform probability distributions on riemannian manifolds. the algorithm is based on optimal information transport +(oit)—an analogue of optimal mass transport (omt). our framework +uses the deep geometric connections between the fisher-rao metric on +the space of probability densities and the right-invariant information +metric on the group of diffeomorphisms. the resulting sampling algorithm is a promising alternative to omt, in particular as our formulation is semi-explicit, free of the nonlinear monge–ampere equation. +compared to markov chain monte carlo methods, we expect our algorithm to stand up well when a large number of samples from a low +dimensional nonuniform distribution is needed. 
keywords: density matching, information geometry, fisher–rao metric, optimal transport, image registration, diffeomorphism groups, random sampling
msc2010: 58e50, 49q10, 58e10",10
"abstract
this paper presents the development of an adaptive algebraic multiscale solver for compressible flow (c-ams) in heterogeneous porous media. similar to the recently developed ams for incompressible (linear) flows [wang et al., jcp, 2014], c-ams operates by defining primal and dual-coarse blocks on top of the fine-scale grid. these coarse grids facilitate the construction of a conservative (finite volume) coarse-scale system and the computation of local basis functions, respectively. however, unlike the incompressible (elliptic) case, the choice of equations to solve for basis functions in compressible problems is not trivial. therefore, several basis function formulations (incompressible and compressible, with and without accumulation) are considered in order to construct an efficient multiscale prolongation operator. as for the restriction operator, c-ams allows for both multiscale finite volume (msfv) and finite element (msfe) methods. finally, in order to resolve high-frequency errors, fine-scale (pre- and post-) smoother stages are employed. in order to reduce computational expense, the c-ams operators (prolongation, restriction, and smoothers) are updated adaptively. in addition to this, the linear system in the newton-raphson loop is infrequently updated. systematic numerical experiments are performed to determine the effect of the various options, outlined above, on the c-ams convergence behaviour. an efficient c-ams strategy for heterogeneous 3d compressible problems is developed based on overall cpu times. finally, c-ams is compared against an industrial-grade algebraic multigrid (amg) solver. results of this comparison illustrate that the c-ams is quite efficient as a nonlinear solver, even when iterated to machine accuracy.
key words: multiscale methods, compressible flows, heterogeneous porous media, scalable linear solvers, multiscale finite volume method, multiscale finite element method, iterative multiscale methods, algebraic multiscale methods.",5
"abstract
numerical simulation of compressible fluid flows is performed using the euler equations. they include the scalar advection equation for the density, the vector advection equation for the velocity, and a given pressure dependence on the density. an approximate solution of an initial–boundary value problem is calculated using the finite element approximation in space. the fully implicit two-level scheme is used for discretization in time. the numerical implementation is based on newton’s method. the main attention is paid to fulfilling the conservation laws for mass and total mechanical energy in the discrete formulation. two-level schemes of splitting by physical processes are employed for numerically solving problems of barotropic fluid flows. for the transition from one time level to the next one, an iterative process is used, where at each iteration the linearized scheme is implemented via solving individual problems for the density and velocity. the possibilities of the proposed schemes are illustrated by numerical results for a two-dimensional model problem with density perturbations.
keywords: compressible fluids, the euler system, barotropic fluid, finite element method, conservation laws, two-level schemes, decoupling scheme",2
"abstract. formal concept analysis and its associated conceptual structures have been used to support exploratory search through conceptual navigation. relational concept analysis (rca) is an extension of formal concept analysis to process relational datasets. rca and its multiple interconnected structures represent good candidates to support exploratory search in relational datasets, as they enable navigation within a structure as well as between the connected structures.
however, +building the entire structures does not present an efficient solution to +explore a small localised area of the dataset, for instance to retrieve the +closest alternatives to a given query. in these cases, generating only a +concept and its neighbour concepts at each navigation step appears as +a less costly alternative. in this paper, we propose an algorithm to compute a concept and its neighbourhood in extended concept lattices. the +concepts are generated directly from the relational context family, and +possess both formal and relational attributes. the algorithm takes into +account two rca scaling operators. we illustrate it on an example. +keywords: relational concept analysis, formal concept analysis, ondemand generation",2 +"abstract +a research frontier has emerged in scientific computation, wherein discretisation error +is regarded as a source of epistemic uncertainty that can be modelled. this raises several +statistical challenges, including the design of statistical methods that enable the coherent +propagation of probabilities through a (possibly deterministic) computational work-flow, +in order to assess the impact of discretisation error on the computer output. this paper +examines the case for probabilistic numerical methods in routine statistical computation. +our focus is on numerical integration, where a probabilistic integrator is equipped with a full +distribution over its output that reflects the fact that the integrand has been discretised. our +main technical contribution is to establish, for the first time, rates of posterior contraction for +one such method. several substantial applications are provided for illustration and critical +evaluation, including examples from statistical modelling, computer graphics and a computer +model for an oil reservoir.",10 +"abstract and +talk (mcdonnell et al., 2017). 
",9
"abstract
today’s javascript applications are composed of scripts from different origins that are loaded at
run time. as not all of these origins are equally trusted, the execution of these scripts should
be isolated from one another. however, some scripts must access the application state and some
may be allowed to change it, while preserving the confidentiality and integrity constraints of the
application.
this paper presents design and implementation of decentjs, a language-embedded sandbox
for full javascript. it enables scripts to run in a configurable degree of isolation with fine-grained
access control. it provides a transactional scope in which effects are logged for review by the
access control policy. after inspection of the log, effects can be committed to the application
state or rolled back.
the implementation relies on javascript proxies to guarantee full interposition for the full
language and for all code, including dynamically loaded scripts and code injected via eval. its
only restriction is that scripts must be compliant with javascript’s strict mode.
1998 acm subject classification d.4.6 security and protection
keywords and phrases javascript, sandbox, proxy",6
"abstract
let n (n) denote the number of isomorphism types of groups of order n. we
consider the integers n that are products of at most 4 not necessarily distinct primes
and exhibit formulas for n (n) for such n.",4
"abstract
we consider the action of an irreducible outer automorphism φ on the
closure of culler–vogtmann outer space. this action has north-south
dynamics and so, under iteration, points converge exponentially to [t+φ ].
+for each n ≥ 3, we give a family of outer automorphisms φk ∈
out(fn ) such that, as k goes to infinity, the rate of convergence of φk
goes to infinity while the rate of convergence of φk−1 goes to one. even
if we only require the rate of convergence of φk to remain bounded away
from one, no such family can be constructed when n < 3.
this family also provides an explicit example of a property described
by handel and mosher: that there is no uniform upper bound on the
distance between the axes of an automorphism and its inverse.",4
"abstract—distribution grid is the medium and low voltage part
of a large power system. structurally, the majority of distribution
networks operate radially, such that energized lines form a
collection of trees, i.e. forest, with a substation being at the root of
any tree. the operational topology/forest may change from time to
time, however tracking these changes, even though important for
the distribution grid operation and control, is hindered by limited
real-time monitoring. this paper develops a learning framework
to reconstruct radial operational structure of the distribution
grid from synchronized voltage measurements in the grid subject
to the exogenous fluctuations in nodal power consumption. to
detect operational lines our learning algorithm uses conditional
independence tests for continuous random variables that are
applicable to a wide class of probability distributions of the nodal
consumption and gaussian injections in particular. moreover, our
algorithm applies to the practical case of unbalanced three-phase
power flow. algorithm performance is validated on ac power flow
simulations over ieee distribution grid test cases.
keywords—distribution networks, power flow, unbalanced three-phase,
graphical models, conditional independence, computational
complexity",3
"abstract. let k be a discretely henselian field whose residue field is separably
closed. answering a question raised by g.
prasad, we show that a semisimple k-group g is quasi-split if and only
if it quasi-splits after a finite tamely ramified
extension of k.",4
"abstract
the idea that there are any large-scale trends in the evolution of biological organisms is
highly controversial. it is commonly believed, for example, that there is a large-scale trend in
evolution towards increasing complexity, but empirical and theoretical arguments undermine
this belief. natural selection results in organisms that are well adapted to their local environments, but it is not clear how local adaptation can produce a global trend. in this paper, i
present a simple computational model, in which local adaptation to a randomly changing
environment results in a global trend towards increasing evolutionary versatility. in this
model, for evolutionary versatility to increase without bound, the environment must be
highly dynamic. the model also shows that unbounded evolutionary versatility implies an
accelerating evolutionary pace. i believe that unbounded increase in evolutionary versatility
is a large-scale trend in evolution. i discuss some of the testable predictions about organismal
evolution that are suggested by the model.",5
"abstract—machine learning models are frequently used to
solve complex security problems, as well as to make decisions in sensitive situations like guiding autonomous vehicles
or predicting financial market behaviors. previous efforts have
shown that numerous machine learning models were vulnerable
to adversarial manipulations of their inputs taking the form
of adversarial samples. such inputs are crafted by adding
carefully selected perturbations to legitimate inputs so as to
force the machine learning model to misbehave, for instance
by outputting a wrong class if the machine learning task of
interest is classification.
in fact, to the best of our knowledge,
all previous work on adversarial sample crafting for neural
networks considered models used to solve classification tasks,
most frequently in computer vision applications. in this paper,
we contribute to the field of adversarial machine learning by
investigating adversarial input sequences for recurrent neural
networks processing sequential data. we show that the classes
of algorithms introduced previously to craft adversarial samples
misclassified by feed-forward neural networks can be adapted
to recurrent neural networks. in an experiment, we show that
adversaries can craft adversarial sequences misleading both
categorical and sequential recurrent neural networks.",9
"abstract. we give a characterization for asymptotic dimension growth. we
apply it to cat(0) cube complexes of finite dimension, giving an alternative proof
of n. wright’s result on their finite asymptotic dimension. we also apply our new
characterization to geodesic coarse median spaces of finite rank and establish
that they have subexponential asymptotic dimension growth. this strengthens a
recent result of j. špakula and n. wright.",4
"abstract
we consider the problem of density estimation on riemannian manifolds. density
estimation on manifolds has many applications in fluid-mechanics, optics and
plasma physics and it appears often when dealing with angular variables (such as
used in protein folding, robot limbs, gene-expression) and in general directional
statistics. in spite of the multitude of algorithms available for density estimation in
the euclidean spaces rn that scale to large n (e.g. normalizing flows, kernel methods and variational approximations), most of these methods are not immediately
suitable for density estimation in more general riemannian manifolds.
we revisit
techniques related to homeomorphisms from differential geometry for projecting
densities to sub-manifolds and use them to generalize the idea of normalizing flows to
more general riemannian manifolds. the resulting algorithm is scalable, simple
to implement and suitable for use with automatic differentiation. we demonstrate
concrete examples of this method on the n-sphere sn .
in recent years, there has been much interest in applying variational inference techniques to learning
large scale probabilistic models in various domains, such as images and text [1, 2, 3, 4, 5, 6].
one of the main issues in variational inference is finding the best approximation to an intractable
posterior distribution of interest by searching through a class of known probability distributions.
the class of approximations used is often limited, e.g., mean-field approximations, implying that
no solution is ever able to resemble the true posterior distribution. this is a widely raised objection
to variational methods, in that unlike mcmc, the true posterior distribution may not be recovered
even in the asymptotic regime. to address this problem, recent work on normalizing flows [7],
inverse autoregressive flows [8], and others [9, 10] (referred to collectively as normalizing flows),
focused on developing scalable methods of constructing arbitrarily complex and flexible approximate
posteriors from simple distributions using transformations parameterized by neural networks, which
gives these models universal approximation capability in the asymptotic regime. in all of these works,
the distributions of interest are restricted to be defined over high dimensional euclidean spaces.
+there are many other distributions defined over special homeomorphisms of euclidean spaces that are
of interest in statistics, such as beta and dirichlet (n-simplex); norm-truncated gaussian (n-ball);
wrapped cauchy and von mises-fisher (n-sphere), which find little applicability in variational
inference with large scale probabilistic models due to the limitations related to density complexity
and gradient computation [11, 12, 13, 14]. many such distributions are unimodal and generating
complicated distributions from them would require creating mixture densities or using auxiliary
random variables. mixture methods require further knowledge or tuning, e.g. the number of mixture
components necessary, and a heavy computational burden on the gradient computation in general,
e.g. with quantile functions [15]. further, mode complexity increases only linearly with mixtures as
opposed to exponential increase with normalizing flows. conditioning on auxiliary variables [16] on
the other hand constrains the use of the created distribution, due to the need for integrating out the
auxiliary factors in certain scenarios. in all of these methods, computation of low-variance gradients
is difficult due to the fact that simulation of random variables cannot in general be reparameterized
(e.g. rejection sampling [17]). in this work, we present methods that generalize previous work on
improving variational inference in rn using normalizing flows to riemannian manifolds of interest
such as spheres sn , tori tn and their product topologies with rn , like infinite cylinders.",10
"abstract
we show that memcapacitive (memory capacitive) systems can be used as synapses in artificial neural
networks. as an example of our approach, we discuss the architecture of an integrate-and-fire neural
network based on memcapacitive synapses. moreover, we demonstrate that the spike-timing-dependent
plasticity can be simply realized with some of these devices.
memcapacitive synapses are a low-energy
alternative to memristive synapses for neuromorphic computation.",9
"abstract
let g be an n-node simple directed planar graph with nonnegative edge weights. we
study the fundamental problems of computing (1) a global cut of g with minimum weight
and (2) a cycle of g with minimum weight. the best previously known algorithm for the
former problem, running in o(n log³ n) time, can be obtained from the algorithm of
łącki, nussbaum, sankowski, and wulff-nilsen for single-source all-sinks maximum flows. the
best previously known result for the latter problem is the o(n log³ n)-time algorithm of
wulff-nilsen. by exploiting duality between the two problems in planar graphs, we solve
both problems in o(n log n log log n) time via a divide-and-conquer algorithm that finds a
shortest non-degenerate cycle. the kernel of our result is an o(n log log n)-time algorithm
for computing noncrossing shortest paths among nodes well ordered on a common face
of a directed plane graph, which is extended from the algorithm of italiano, nussbaum,
sankowski, and wulff-nilsen for an undirected plane graph.",8
"abstract
a shortest-path algorithm finds a path with minimal cost between two vertices in a
graph. a plethora of shortest-path algorithms has been studied in the literature, spanning multiple
disciplines. this paper presents a survey of shortest-path algorithms based on a taxonomy that is
introduced in the paper. one dimension of this taxonomy is the various flavors of the shortest-path
problem. there is no one general algorithm that is capable of solving all variants of the shortest-path
problem due to the space and time complexities associated with each algorithm.
other important
dimensions of the taxonomy include whether the shortest-path algorithm operates over a static or
a dynamic graph, whether the shortest-path algorithm produces exact or approximate answers, and
whether the objective of the shortest-path algorithm is to achieve time-dependence or only to be
goal-directed. this survey studies and classifies shortest-path algorithms according to the proposed
taxonomy. the survey also presents the challenges and proposed solutions associated with each
category in the taxonomy.",8
"abstract— we present an event-triggered control strategy
for stabilizing a scalar, continuous-time, time-invariant, linear
system over a digital communication channel having bounded
delay, and in the presence of bounded system disturbance. we
propose an encoding-decoding scheme, and determine lower
bounds on the packet size and on the information transmission
rate which are sufficient for stabilization. we show that for
small values of the delay, the timing information implicit in
the triggering events is enough to stabilize the system with any
positive rate. in contrast, when the delay increases beyond a
critical threshold, the timing information alone is not enough to
stabilize the system and the transmission rate begins to increase.
finally, large values of the delay require transmission rates
higher than what is prescribed by the classic data-rate theorem.
the results are numerically validated using a linearized model
of an inverted pendulum.
index terms— control under communication constraints,
event-triggered control, quantized control",3
"abstract. we define tate-betti and tate-bass invariants for modules over a commutative noetherian local ring r. then we show
the periodicity of these invariants provided that r is a hypersurface.
in case r is also gorenstein, we see that a finitely generated +r-module m and its matlis dual have the same tate-betti and +tate-bass numbers.",0 +"abstract—this paper studies the performance of sparse regression codes for lossy compression with the squared-error distortion +criterion. in a sparse regression code, codewords are linear +combinations of subsets of columns of a design matrix. it is shown +that with minimum-distance encoding, sparse regression codes +achieve the shannon rate-distortion function for i.i.d. gaussian +sources r∗ (d) as well as the optimal excess-distortion exponent. +this completes a previous result which showed that r∗ (d) and +the optimal exponent were achievable for distortions below a +certain threshold. the proof of the rate-distortion result is based +on the second moment method, a popular technique to show +that a non-negative random variable x is strictly positive with +high probability. in our context, x is the number of codewords +within target distortion d of the source sequence. we first identify +the reason behind the failure of the standard second moment +method for certain distortions, and illustrate the different failure +modes via a stylized example. we then use a refinement of the +second moment method to show that r∗ (d) is achievable for +all distortion values. finally, the refinement technique is applied +to suen’s correlation inequality to prove the achievability of the +optimal gaussian excess-distortion exponent. +index terms—lossy compression, sparse superposition codes, +rate-distortion function, gaussian source, error exponent, second +moment method, large deviations",7 +"abstract. 
motivated by the common academic problem of allocating papers to
referees for conference reviewing, we propose a novel mechanism for solving the
assignment problem when we have a two-sided matching problem with preferences from one side (the agents/reviewers) over the other side (the objects/papers)
and both sides have capacity constraints. the assignment problem is a fundamental problem in both computer science and economics with application in many
areas including task and resource allocation. we draw inspiration from multicriteria decision making and voting and use order weighted averages (owas)
to propose a novel and flexible class of algorithms for the assignment problem.
we show an algorithm for finding a σ-owa assignment in polynomial time, in
contrast to the np-hardness of finding an egalitarian assignment. inspired by this
setting we observe an interesting connection between our model and the classic
proportional multi-winner election problem in social choice.",2
"abstract
we propose a new estimator of a discrete monotone probability
mass function with known flat regions. we analyse its asymptotic
properties and compare its performance to the grenander estimator
and to the monotone rearrangement estimator.",10
"abstract— we consider the problem of dense depth prediction
from a sparse set of depth measurements and a single rgb
image. since depth estimation from monocular images alone is
inherently ambiguous and unreliable, to attain a higher level of
robustness and accuracy, we introduce additional sparse depth
samples, which are either acquired with a low-resolution depth
sensor or computed via visual simultaneous localization and
mapping (slam) algorithms. we propose the use of a single
deep regression network to learn directly from the rgb-d raw
data, and explore the impact of the number of depth samples on
prediction accuracy.
our experiments show that, compared to using only rgb images, the addition of 100 spatially random
depth samples reduces the prediction root-mean-square error
by 50% on the nyu-depth-v2 indoor dataset. it also boosts
the percentage of reliable prediction from 59% to 92% on
the kitti dataset. we demonstrate two applications of the
proposed algorithm: a plug-in module in slam to convert
sparse maps to dense maps, and super-resolution for lidars.
software and video demonstration are publicly available.",2
"abstract
estimating the entropy based on data is one of the prototypical problems in distribution
property testing and estimation. for estimating the shannon entropy of a distribution on s
elements with independent samples, [pan04] showed that the sample complexity is sublinear in
s, and [vv11a] showed that consistent estimation of shannon entropy is possible if and only
if the sample size n far exceeds s/log s. in this paper we consider the problem of estimating the
entropy rate of a stationary reversible markov chain with s states from a sample path of n
observations. we show that
(a) as long as the markov chain mixes not too slowly, i.e., the relaxation time is
at most o(s²/ln³ s), consistent estimation is achievable when n ≫ s²/log s.
(b) as long as the markov chain has some slight dependency, i.e., the relaxation
time is at least 1 + ω(ln² s/√s), consistent estimation is impossible when n ≲ s²/log s.",10
"abstract
the paper investigates the computational problem of predicting rna secondary
structures. the general belief is that allowing pseudoknots makes the problem hard.
existing polynomial-time algorithms are heuristic algorithms with no performance guarantee and can only handle limited types of pseudoknots. in this paper we initiate the
study of predicting rna secondary structures with a maximum number of stacking
pairs while allowing arbitrary pseudoknots.
we obtain two approximation algorithms
with worst-case approximation ratios of 1/2 and 1/3 for planar and general secondary
structures, respectively. for an rna sequence of n bases, the approximation algorithm
for planar secondary structures runs in o(n³) time while that for the general case runs
in linear time. furthermore, we prove that allowing pseudoknots makes it np-hard to
maximize the number of stacking pairs in a planar secondary structure. this result is in
contrast with the recent np-hard results on pseudoknots which are based on optimizing
some general and complicated energy functions.",5
"abstract
in this paper, we further develop the approach, originating in [26], to “computation-friendly”
statistical estimation via convex programming. our focus is on estimating a linear or quadratic
form of an unknown “signal,” known to belong to a given convex compact set, via noisy indirect
observations of the signal. classical theoretical results on the subject deal with precisely stated
statistical models and aim at designing statistical inferences and quantifying their performance in
a closed analytic form. in contrast to this traditional (highly instructive) descriptive framework,
the approach we promote here can be qualified as operational – the estimation routines and their
risks are not available “in a closed form,” but are yielded by an efficient computation. all we
know in advance is that under favorable circumstances the risk of the resulting estimate, whether
high or low, is provably near-optimal under the circumstances. as a compensation for the lack of
“explanatory power,” this approach is applicable to a much wider family of observation schemes
than those where “closed form descriptive analysis” is possible.
we discuss applications of this approach to classical problems of estimating linear forms of parameters of a sub-gaussian distribution and quadratic forms of parameters of gaussian and discrete
distributions.
the performance of the constructed estimates is illustrated by computation experiments in which we compare the risks of the constructed estimates with (numerical) lower bounds +for corresponding minimax risks for randomly sampled estimation problems.",10 +"abstract +we study nonconvex finite-sum problems and analyze stochastic variance reduced gradient +(svrg) methods for them. svrg and related methods have recently surged into prominence for +convex optimization given their edge over stochastic gradient descent (sgd); but their theoretical analysis almost exclusively assumes convexity. in contrast, we prove non-asymptotic rates +of convergence (to stationary points) of svrg for nonconvex optimization, and show that it is +provably faster than sgd and gradient descent. we also analyze a subclass of nonconvex problems on which svrg attains linear convergence to the global optimum. we extend our analysis +to mini-batch variants of svrg, showing (theoretical) linear speedup due to mini-batching in +parallel settings.",9 +"abstract +a template-based generic programming approach was presented in a previous paper [19] +that separates the development effort of programming a physical model from that of computing +additional quantities, such as derivatives, needed for embedded analysis algorithms. in this +paper, we describe the implementation details for using the template-based generic programming +approach for simulation and analysis of partial differential equations (pdes). we detail several +of the hurdles that we have encountered, and some of the software infrastructure developed +to overcome them. we end with a demonstration where we present shape optimization and +uncertainty quantification results for a 3d pde application.",5 +"abstract +blomer and naewe [bn09] modified the randomized sieving algorithm of ajtai, kumar and +sivakumar [aks01] to solve the shortest vector problem (svp). 
the algorithm starts with
N = 2^O(n) randomly chosen vectors in the lattice and employs a sieving procedure to iteratively
obtain shorter vectors in the lattice. the running time of the sieving procedure is quadratic in
N.
we study this problem for the special but important case of the ℓ∞ norm. we give a new
sieving procedure that runs in time linear in N, thereby significantly improving the running time
of the algorithm for svp in the ℓ∞ norm. as in [aks02, bn09], we also extend this algorithm
to obtain significantly faster algorithms for approximate versions of the shortest vector problem
and the closest vector problem (cvp) in the ℓ∞ norm.
we also show that the heuristic sieving algorithms of nguyen and vidick [nv08] and wang
et al. [wltb11] can also be analyzed in the ℓ∞ norm. the main technical contribution in
this part is to calculate the expected volume of intersection of a unit ball centred at the origin and
another ball of a different radius centred at a uniformly random point on the boundary of the
unit ball. this might be of independent interest.",8
"abstract. we present a method for computing the table of marks of a direct product
of finite groups. in contrast to the character table of a direct product of two finite groups,
its table of marks is not simply the kronecker product of the tables of marks of the two
groups. based on a decomposition of the inclusion order on the subgroup lattice of a
direct product as a relation product of three smaller partial orders, we describe the table
of marks of the direct product essentially as a matrix product of three class incidence
matrices. each of these matrices is in turn described as a sparse block diagonal matrix.
+as an application, we use a variant of this matrix product to construct a ghost ring +and a mark homomorphism for the rational double burnside algebra of the symmetric +group s3 .",4 +"abstract +if x, y, z denote sets of random variables, two different data sources may contain +samples from px,y and py,z , respectively. we argue that causal inference can help +inferring properties of the ‘unobserved joint distributions’ px,y,z or px,z . the properties +may be conditional independences (as in ‘integrative causal inference’) or also quantitative +statements about dependences. +more generally, we define a learning scenario where the input is a subset of variables +and the label is some statistical property of that subset. sets of jointly observed variables +define the training points, while unobserved sets are possible test points. to solve this +learning task, we infer, as an intermediate step, a causal model from the observations that +then entails properties of unobserved sets. accordingly, we can define the vc dimension +of a class of causal models and derive generalization bounds for the predictions. +here, causal inference becomes more modest and better accessible to empirical tests +than usual: rather than trying to find a causal hypothesis that is ‘true’ (which is a problematic term when it is unclear how to define interventions) a causal hypothesis is useful +whenever it correctly predicts statistical properties of unobserved joint distributions. +within such a ‘pragmatic’ application of causal inference, some popular heuristic approaches become justified in retrospect. it is, for instance, allowed to infer dags from +partial correlations instead of conditional independences if the dags are only used to +predict partial correlations. 
+i hypothesize that our pragmatic view on causality may even cover the usual meaning in terms of interventions and sketch why predicting the impact of interventions can
sometimes also be phrased as a task of the above type.",10
"abstract
the constrained lcs problem asks one to find a longest common subsequence of two input strings
a and b with some constraints. the str-ic-lcs problem is a variant of the constrained lcs
problem, where the solution must include a given constraint string c as a substring. given two strings
a and b of respective lengths M and N, and a constraint string c of length at most min{M, N},
the best known algorithm for the str-ic-lcs problem, proposed by deorowicz (inf. process. lett.,
11:423–426, 2012), runs in O(MN) time. in this work, we present an O(mN + nM)-time solution to
the str-ic-lcs problem, where m and n denote the sizes of the run-length encodings of a and b,
respectively. since m ≤ M and n ≤ N always hold, our algorithm is always as fast as deorowicz’s
algorithm, and is faster when input strings are compressible via rle.",8
"abstract
let (r, m) be a noetherian local ring, e the injective hull of k = r/m and m ◦ =
homr (m, e) the matlis dual of the r-module m. if the canonical monomorphism ϕ :
m → m ◦◦ is surjective, m is known to be called (matlis-)reflexive. with the help of the
bass numbers µ(p, m) = dimκ(p) (homr (r/p, m)p ) of m with respect to p we show: m
is reflexive if and only if µ(p, m) = µ(p, m ◦◦ ) for all p ∈ spec(r). from this it follows
for every r-module m: if there exists a monomorphism m ◦◦ ↪ m or an epimorphism
m ։ m ◦◦ , then m is already reflexive.
key words: matlis-reflexive modules, bass numbers, associated prime ideals, torsion modules, cotorsion modules.
+mathematics subject classification (2010): 13b35, 13c11, 13e10.",0 +"abstract +the ability to use inexpensive, noninvasive sensors to accurately classify flying insects would have significant +implications for entomological research, and allow for the development of many useful applications in vector +control for both medical and agricultural entomology. given this, the last sixty years have seen many research +efforts on this task. to date, however, none of this research has had a lasting impact. in this work, we explain +this lack of progress. we attribute the stagnation on this problem to several factors, including the use of acoustic +sensing devices, the overreliance on the single feature of wingbeat frequency, and the attempts to learn complex +models with relatively little data. in contrast, we show that pseudo-acoustic optical sensors can produce vastly +superior data, that we can exploit additional features, both intrinsic and extrinsic to the insect’s flight behavior, +and that a bayesian classification approach allows us to efficiently learn classification models that are very +robust to overfitting. we demonstrate our findings with large scale experiments that dwarf all previous works +combined, as measured by the number of insects and the number of species considered.",5 +abstract,8 +"abstract. in the context of the genome-wide association studies (gwas), one has to solve +long sequences of generalized least-squares problems; such a task has two limiting factors: +execution time –often in the range of days or weeks– and data management –data sets in +the order of terabytes. we present an algorithm that obviates both issues. by pipelining +the computation, and thanks to a sophisticated transfer strategy, we stream data from hard +disk to main memory to gpus and achieve sustained peak performance; with respect to a +highly-optimized cpu implementation, our algorithm shows a speedup of 2.6x. 
moreover, the +approach lends itself to multiple gpus and attains almost perfect scalability. when using 4 +gpus, we observe speedups of 9x over the aforementioned implementation, and 488x over a +widespread biology library. +keywords: gwas, generalized least-squares, computational biology, out-of-core computation, high-performance, multiple gpus, data transfer, multibuffering, streaming, big data",5 +"abstract. finite rank median spaces are a simultaneous generalisation +of finite dimensional cat(0) cube complexes and real trees. if γ is an +irreducible lattice in a product of rank one simple lie groups, we show +that every action of γ on a complete, finite rank median space has +a global fixed point. this is in sharp contrast with the behaviour of +actions on infinite rank median spaces. +the fixed point property is obtained as corollary to a superrigidity +result; the latter holds for irreducible lattices in arbitrary products of +compactly generated groups. +we exploit roller compactifications of median spaces; these were introduced in [fio17a] and generalise a well-known construction in the +case of cube complexes. we provide a reduced 1-cohomology class that +detects group actions with a finite orbit in the roller compactification. +even for cat(0) cube complexes, only second bounded cohomology +classes were known with this property, due to [cfi16]. as a corollary, +we observe that, in gromov’s density model, random groups at low density do not have shalom’s property hf d .",4 +"abstract—in this paper, we compare the performance of two +main mimo techniques, beamforming and multiplexing, in the +terahertz (thz) band. the main problem with the thz band is +its huge propagation loss, which is caused by the tremendous +signal attenuation due to molecule absorption of the electromagnetic wave. to overcome the path loss issue, massive mimo +has been suggested to be employed in the network and is expected +to provide tbps for a distance within a few meters. 
in this
+context, beamforming has recently been studied as the main technique
+to take advantage of mimo in thz and overcome the very
+high path loss with the assumption that the thz communication
+channel is line-of-sight (los) and there are no significant
+multipath rays. on the other hand, recent studies also showed
+that the well-known energy absorbed by molecules can be re-radiated immediately at the same frequency. such a re-radiated
+signal is correlated with the main signal and can provide rich
+scattering paths for the communication channel. this means that
+a significant mimo multiplexing gain can be achieved even in a
+los scenario for the thz band. our simulation results reveal a
+surprising observation that mimo multiplexing could be
+a better choice than mimo beamforming under certain
+conditions in thz communications.",7
+"abstract. an algorithm to decide the emptiness of a regular type expression with set operators given a set of parameterised type definitions
+is presented. the algorithm can also be used to decide the equivalence
+of two regular type expressions and the inclusion of one regular type expression in another. the algorithm strictly generalises previous work in
+that tuple distributivity is not assumed and set operators are permitted
+in type expressions.
+keywords: type, emptiness, prescriptive type",6
+"abstract—the allan variance (av) is a widely used quantity
+in areas focusing on error measurement as well as in the general
+analysis of variance for autocorrelated processes in domains
+such as engineering and, more specifically, metrology. the form
+of this quantity is widely used to detect noise patterns and
+indications of stability within signals. however, the properties of
+this quantity are not known for commonly occurring processes
+whose covariance structure is non-stationary and, in these cases,
+an erroneous interpretation of the av could lead to misleading
+conclusions.
this paper generalizes the theoretical form of the +av to some non-stationary processes while at the same time +being valid also for weakly stationary processes. some simulation +examples show how this new form can help to understand the +processes for which the av is able to distinguish these from the +stationary cases and hence allow for a better interpretation of +this quantity in applied cases. +index terms—metrology, sensor calibration, bias-instability, +longitudinal studies, haar wavelet variance, heteroscedasticity.",10 +"abstract—domain adaptation algorithms are useful when the +distributions of the training and the test data are different. in +this paper, we focus on the problem of instrumental variation +and time-varying drift in the field of sensors and measurement, +which can be viewed as discrete and continuous distributional +change in the feature space. we propose maximum independence +domain adaptation (mida) and semi-supervised mida (smida) +to address this problem. domain features are first defined to +describe the background information of a sample, such as the +device label and acquisition time. then, mida learns a subspace +which has maximum independence with the domain features, +so as to reduce the inter-domain discrepancy in distributions. a +feature augmentation strategy is also designed to project samples +according to their backgrounds so as to improve the adaptation. +the proposed algorithms are flexible and fast. their effectiveness +is verified by experiments on synthetic datasets and four realworld ones on sensors, measurement, and computer vision. they +can greatly enhance the practicability of sensor systems, as well +as extend the application scope of existing domain adaptation +algorithms by uniformly handling different kinds of distributional +change. 
+index terms—dimensionality reduction, domain adaptation,
+drift correction, hilbert-schmidt independence criterion, machine olfaction, transfer learning",2
+"abstract—low-rank modeling has many important applications in computer vision and machine learning. while the matrix rank is
+often approximated by the convex nuclear norm, the use of nonconvex low-rank regularizers has demonstrated better empirical
+performance. however, the resulting optimization problem is much more challenging. recent state-of-the-art requires an expensive full
+svd in each iteration. in this paper, we show that for many commonly-used nonconvex low-rank regularizers, the singular values
+obtained from the proximal operator can be automatically thresholded. this allows the proximal operator to be efficiently approximated by
+the power method. we then develop a fast proximal algorithm and its accelerated variant with inexact proximal step. a convergence
+rate of o(1/t ), where t is the number of iterations, can be guaranteed. furthermore, we show the proposed algorithm can be
+parallelized, and the resultant algorithm achieves nearly linear speedup w.r.t. the number of threads. extensive experiments are
+performed on matrix completion and robust principal component analysis. significant speedup over the state-of-the-art is observed.
+index terms—low-rank matrix learning, nonconvex regularization, proximal algorithm, parallel algorithm, matrix completion, robust
+principal component analysis",2
+"abstract
+most of the face recognition works focus on specific modules or demonstrate a research idea. this paper presents a pose-invariant 3d-aided 2d face recognition system
+(ur2d) that is robust to pose variations as large as 90◦ by leveraging deep learning
+technology. the architecture and the interface of ur2d are described, and each module is introduced in detail.
extensive experiments are conducted on the uhdb31 and
+ijb-a, demonstrating that ur2d outperforms existing 2d face recognition systems
+such as vgg-face, facenet, and a commercial off-the-shelf software (cots) by at
+least 9% on the uhdb31 dataset and 3% on the ijb-a dataset on average in face identification tasks. ur2d also achieves state-of-the-art performance of 85% on the ijb-a
+dataset by comparing the rank-1 accuracy score from template matching. it fills a gap
+by providing a 3d-aided 2d face recognition system that has compatible results with
+2d face recognition systems using deep learning techniques.
+keywords: face recognition, 3d-aided 2d face recognition, deep learning,
+pipeline
+2010 msc: 00-01, 99-00",1
+"abstract. recently, anderson and dumitrescu’s s-finiteness has attracted
+the interest of several authors. in this paper, we introduce the notions of s-finitely presented modules and then of s-coherent rings which are s-versions of
+finitely presented modules and coherent rings, respectively. among other results, we give an s-version of the classical chase’s characterization of coherent
+rings. we end the paper with a brief discussion on other s-versions of finitely
+presented modules and coherent rings. we prove that these last s-versions can
+be characterized in terms of localization.
+key words. s-finite, s-finitely presented, s-coherent modules, s-coherence rings.
+2010 mathematics subject classification. 13e99.",0
+"abstract
+growth in both size and complexity of modern data challenges the applicability
+of traditional likelihood-based inference. composite likelihood (cl) methods address
+the difficulties related to model selection and computational intractability of the full
+likelihood by combining a number of low-dimensional likelihood objects into a single
+objective function used for inference.
this paper introduces a procedure to combine +partial likelihood objects from a large set of feasible candidates and simultaneously +carry out parameter estimation. the new method constructs estimating equations balancing statistical efficiency and computing cost by minimizing an approximate distance +from the full likelihood score subject to a `1 -norm penalty representing the available +computing resources. this results in truncated cl equations containing only the most +informative partial likelihood score terms. an asymptotic theory within a framework +where both sample size and data dimension grow is developed and finite-sample properties are illustrated through numerical examples.",10 +"abstract in this work, we formulated a real-world problem related to sewerpipeline gas detection using the classification-based approaches. the primary +goal of this work was to identify the hazardousness of sewer-pipeline to offer +safe and non-hazardous access to sewer-pipeline workers so that the human +fatalities, which occurs due to the toxic exposure of sewer gas components, +can be avoided. the dataset acquired through laboratory tests, experiments, +and various literature-sources were organized to design a predictive model +that was able to identify/classify hazardous and non-hazardous situation of +sewer-pipeline. to design such prediction model, several classification algorithms were used and their performances were evaluated and compared, both +empirically and statistically, over the collected dataset. in addition, the performances of several ensemble methods were analyzed to understand the extent +of improvement offered by these methods. the result of this comprehensive +study showed that the instance-based-learning algorithm performed better +than many other algorithms such as multi-layer perceptron, radial basis function network, support vector machine, reduced pruning tree, etc. 
similarly, it +was observed that multi-scheme ensemble approach enhanced the performance +of base predictors. +v. k. ojha +it4innovations, všb technical university of ostrava, ostrava, czech republic and dept. +of computer science & engineering, jadavpur university, kolkata, india +e-mail: varun.kumar.ojha@vsb.cz +p. dutta +dept. of computer & system sciences, visva-bharati university, india +e-mail: paramartha.dutta@gmail.com +a chaudhuri +dept. of computer science & engineering, jadavpur university, kolkata, india e-mail: +atalc23@gmail.com +neural computing and applications +doi: 10.1007/s00521-016-2443-0",9 +"abstract state machines. this approach has recently been extended to suggest +a formalization of the notion of effective computation over arbitrary countable domains. the central +notions are summarized herein.",6 +"abstract concept, which results in 1169 +physical objects in total. +afterwards, we utilize a cleaned subset of the project gutenberg corpus [11], which contains 3,036 +english books written by 142 authors. an assumption here is that sentences in fictions are more +4",2 +"abstract +in this paper we study the simple semi-lévy driven continuous-time generalized +autoregressive conditionally heteroscedastic (ss-cogarch) process. the statistical properties of this process are characterized. this process has the potential +to approximate any semi-lévy driven cogarch processes. we show that the +state representation of such ss-cogarch process can be described by a random +recurrence equation with periodic random coefficients. the almost sure absolute +convergence of the state process is proved. the periodically stationary solution of +the state process is shown which cause the volatility to be periodically stationary +under some suitable conditions. also it is shown that the increments with constant +length of such ss-cogarch process is itself a periodically correlated (pc) process. 
+finally, we apply some test to investigate the pc behavior of the increments (with +constant length) of the simulated samples of proposed ss-cogarch process. +keywords: continuous-time garch process; semi-lévy process; periodically +correlated; periodically stationary.",10 +abstract,1 +"abstract +background: the human habitat is a host where microbial species evolve, function, and continue to evolve. +elucidating how microbial communities respond to human habitats is a fundamental and critical task, as +establishing baselines of human microbiome is essential in understanding its role in human disease and health. +recent studies on healthy human microbiome focus on particular body habitats, assuming that microbiome +develop similar structural patterns to perform similar ecosystem function under same environmental conditions. +however, current studies usually overlook a complex and interconnected landscape of human microbiome and +limit the ability in particular body habitats with learning models of specific criterion. therefore, these methods +could not capture the real-world underlying microbial patterns effectively. +results: to obtain a comprehensive view, we propose a novel ensemble clustering framework to mine the +structure of microbial community pattern on large-scale metagenomic data. particularly, we first build a microbial +similarity network via integrating 1920 metagenomic samples from three body habitats of healthy adults. then a +novel symmetric nonnegative matrix factorization (nmf) based ensemble model is proposed and applied onto the +network to detect clustering pattern. extensive experiments are conducted to evaluate the effectiveness of our +model on deriving microbial community with respect to body habitat and host gender. from clustering results, we +observed that body habitat exhibits a strong bound but non-unique microbial structural pattern. 
meanwhile, +human microbiome reveals different degree of structural variations over body habitat and host gender. +conclusions: in summary, our ensemble clustering framework could efficiently explore integrated clustering results +to accurately identify microbial communities, and provide a comprehensive view for a set of microbial +communities. the clustering results indicate that structure of human microbiome is varied systematically across +body habitats and host genders. such trends depict an integrated biography of microbial communities, which offer +a new insight towards uncovering pathogenic model of human microbiome.",5 +"abstract—regions of nested loops are a common feature of +high performance computing (hpc) codes. in shared memory +programming models, such as openmp, these structure are +the most common source of parallelism. parallelising these +structures requires the programmers to make a static decision +on how parallelism should be applied. however, depending on +the parameters of the problem and the nature of the code, +static decisions on which loop to parallelise may not be optimal, +especially as they do not enable the exploitation of any runtime +characteristics of the execution. changes to the iterations of the +loop which is chosen to be parallelised might limit the amount +of processors that can be utilised. +we have developed a system that allows a code to make a +dynamic choice, at runtime, of what parallelism is applied to +nested loops. the system works using a source to source compiler, +which we have created, to perform transformations to user’s +code automatically, through a directive based approach (similar +to openmp). this approach requires the programmer to specify +how the loops of the region can be parallelised and our runtime +library is then responsible for making the decisions dynamically +during the execution of the code. 
+our method for providing dynamic decisions on which loop
+to parallelise significantly outperforms the standard methods for
+achieving this through openmp (using if clauses) and further
+optimisations were possible with our system when addressing
+simulations where the number of iterations of the loops changes
+during the runtime of the program or loops are not perfectly
+nested.",6
+"abstract
+in this paper, we propose a semi-supervised
+learning method where we train two neural
+networks in a multi-task fashion: a target
+network and a confidence network. the target network is optimized to perform a given
+task and is trained using a large set of unlabeled data that are weakly annotated. we
+propose to weight the gradient updates to
+the target network using the scores provided
+by the second confidence network, which
+is trained on a small amount of supervised
+data. thus we prevent weight updates
+computed from noisy labels from harming the quality of the target network model. we evaluate
+our learning strategy on two different tasks:
+document ranking and sentiment classification. the results demonstrate that our approach not only enhances the performance
+compared to the baselines but also speeds
+up the learning process from weak labels.",9
+"abstract:
+materials design and development typically takes several decades from the initial discovery to
+commercialization with the traditional trial and error development approach. with the accumulation of
+data from both experimental and computational results, data based machine learning becomes an
+emerging field in materials discovery, design and property prediction. this manuscript reviews the
+history of materials science as a discipline, the most common machine learning methods used in
+materials science, and specifically how they are used in materials discovery, design, synthesis and even
+failure detection and analysis after materials are deployed in real application.
finally, the limitations of +machine learning for application in materials science and challenges in this emerging field is discussed. +keywords: machine learning, materials discovery and design, materials synthesis, failure detection +1. introduction +materials science has a long history that can date back to the bronze age 1. however, only until the 16th +century, first book on metallurgy was published, marking the beginning of systematic studies in +materials science 2. researches in materials science were purely empirical until theoretical models were +developed. with the advent of computers in the last century, numerical methods to solve theoretical +models became available, ranging from dft (density functional theory) based quantum mechanical +modeling of electronic structure for optoelectronic properties calculation, to continuum based finite +element modeling for mechanical properties 3-4. multiscale modeling that bridge various time and spatial +scales were also developed in the materials science to better simulate the real complex system 5. even +so, it takes several decades from materials discovery to development and commercialization 6-7 . even +though physical modeling can reduce the amount of time by guiding experiment work. the limitation is +also obvious. dft are only used for functional materials optoelectronic property calculation, and that is +only limited to materials without defect 8 . the assumption itself is far off from reality. new concept such +as multiscale modeling is still far away from large scale real industrial application. traditional ways of +materials development are impeding the progress in this field and relevant technological industry. +with the large amount of complex data generated by experiment, especially simulation results from +both published and archived data including materials property value, processing conditions, and +microstructural images, analyzing them all becoming increasingly challenging for researchers. 
inspired +by the human genome initiative, obama government launched a materials genome initiative hoping to +reduce current materials development time to half 9. with the increase of computing power and the +development of machine learning algorithms, materials informatics has increasingly become another +paradigm in the field. +researchers are already using machine learning method for materials property prediction and discovery. +machine learning forward model are used for materials property prediction after trained on data from +experiments and physical simulations. bhadeshia et al. applied neural network(nn) technique to model +creep property and phase structure in steel 10-11. crystal structure prediction is another area of study for +machine learning thanks to the large amount of structural data in crystallographic database. k -nearest-",5 +abstract,1 +abstract,2 +"abstract: in this paper, an efficient offline hand written character +recognition algorithm is proposed based on associative memory net (amn). +the amn used in this work is basically auto associative. the implementation +is carried out completely in ‘c’ language. to make the system perform to its +best with minimal computation time, a parallel algorithm is also developed +using an api package openmp. characters are mainly english alphabets +(small (26), capital (26)) collected from system (52) and from different +persons (52). the characters collected from system are used to train the amn +and characters collected from different persons are used for testing the +recognition ability of the net. the detailed analysis showed that the network +recognizes the hand written characters with recognition rate of 72.20% in +average case. however, in best case, it recognizes the collected hand written +characters with 88.5%. the developed network consumes 3.57 sec (average) +in serial implementation and 1.16 sec (average) in parallel implementation +using openmp. 
+keywords: offline; hand written character; associative memory net; +openmp; serial; parallel.",9 +"abstract +in large-scale modern data analysis, first-order optimization methods are +usually favored to obtain sparse estimators in high dimensions. this paper +performs theoretical analysis of a class of iterative thresholding based estimators defined in this way. oracle inequalities are built to show the nearly +minimax rate optimality of such estimators under a new type of regularity +conditions. moreover, the sequence of iterates is found to be able to approach the statistical truth within the best statistical accuracy geometrically +fast. our results also reveal different benefits brought by convex and nonconvex types of shrinkage.",10 +"abstract. for a pair of groups g, h we study pairs of actions g on h and h on +g such that these pairs are compatible and non-abelian tensor products g ⊗ h are +defined.",4 +"abstract. we study a natural problem in graph sparsification, the spanning tree congestion (stc) problem. informally, the stc problem seeks +a spanning tree with no tree-edge routing too many of the original edges. +the root of this problem dates back to at least 30 years ago, motivated +by applications in network design, parallel computing and circuit design. +variants of the problem have also seen algorithmic applications as a +preprocessing step of several important graph algorithms. +for any general connected graph with n vertices and m edges, we show that +√ +its stc is at most o( mn), which is asymptotically optimal since we also +√ +demonstrate graphs with stc at least ω( mn). we present a polynomial√ +time algorithm which computes a spanning tree with congestion o( mn · +log n). we also present another algorithm for computing a spanning tree +√ +with congestion o( mn); this algorithm runs in sub-exponential time +2 +when m = ω(n log n). 
+for achieving the above results, an important intermediate theorem
+is a generalized győri-lovász theorem, for which chen et al. [14] gave a
+non-constructive proof. we give the first elementary and constructive
+proof by providing a local search algorithm with running time o∗ (4n ),
+which is a key ingredient of the above-mentioned sub-exponential time
+algorithm. we discuss a few consequences of the theorem concerning
+graph partitioning, which might be of independent interest.
+we also show that for any graph which satisfies certain expanding properties, its stc is at most o(n), and a corresponding spanning tree can be
+computed in polynomial time. we then use this to show that a random
+graph has stc θ(n) with high probability.",8
+abstract,1
+"abstract—scene text detection is a challenging problem in
+computer vision. in this paper, we propose a novel text detection
+network based on prevalent object detection frameworks. in
+order to obtain stronger semantic features, we adopt resnet as
+feature extraction layers and exploit multi-level features by
+combining hierarchical convolutional networks. a vertical
+proposal mechanism is utilized to avoid proposal classification,
+while the regression layer remains working to improve localization
+accuracy. our approach evaluated on the icdar2013 dataset
+achieves 0.91 f-measure, which outperforms previous state-of-the-art results in scene text detection.
+keywords—scene text detection; deep
+ctpn",1
+"abstract
+we propose a way to construct fiducial distributions for a multidimensional parameter using a step-by-step conditional procedure related to the inferential importance of the components of the parameter. for discrete models, in which the nonuniqueness of the fiducial distribution is well known, we propose to use the geometric
+mean of the “extreme cases” and show its good behavior with respect to the more
+traditional arithmetic mean.
connections with the generalized fiducial inference approach developed by hannig and with confidence distributions are also analyzed. the +suggested procedure strongly simplifies when the statistical model belongs to a subclass of the natural exponential family, called conditionally reducible, which includes +the multinomial and the negative-multinomial models. furthermore, because fiducial +inference and objective bayesian analysis are both attempts to derive distributions +for an unknown parameter without any prior information, it is natural to discuss +their relationships. in particular, the reference posteriors, which also depend on the +importance ordering of the parameters are the natural terms of comparison. we +show that fiducial and reference posterior distributions coincide in the location-scale +models, and we characterize the conditionally reducible natural exponential families +for which this happens. the discussion of some classical examples closes the paper.",10 +"abstract—developers spend a significant amount of time +searching for code—e.g., to understand how to complete, correct, +or adapt their own code for a new context. unfortunately, the +state of the art in code search has not evolved much beyond text +search over tokenized source. code has much richer structure and +semantics than normal text, and this property can be exploited to +specialize the code-search process for better querying, searching, +and ranking of code-search results. +we present a new code-search engine named source forager. +given a query in the form of a c/c++ function, source forager searches a pre-populated code database for similar c/c++ +functions. source forager preprocesses the database to extract a +variety of simple code features that capture different aspects of +code. a search returns the k functions in the database that are +most similar to the query, based on the various extracted code +features. 
+we tested the usefulness of source forager using a variety of +code-search queries from two domains. our experiments show +that the ranked results returned by source forager are accurate, +and that query-relevant functions can be reliably retrieved even +when searching through a large code database that contains very +few query-relevant functions. +we believe that source forager is a first step towards muchneeded tools that provide a better code-search experience. +index terms—code search, similar code, program features.",6 +"abstract +machine learning models, including state-of-the-art +deep neural networks, are vulnerable to small perturbations that cause unexpected classification errors. this unexpected lack of robustness raises fundamental questions +about their generalization properties and poses a serious +concern for practical deployments. as such perturbations +can remain imperceptible – the formed adversarial examples demonstrate an inherent inconsistency between vulnerable machine learning models and human perception – +some prior work casts this problem as a security issue. despite the significance of the discovered instabilities and ensuing research, their cause is not well understood and no +effective method has been developed to address the problem. in this paper, we present a novel theory to explain why +this unpleasant phenomenon exists in deep neural networks. +based on that theory, we introduce a simple, efficient, and +effective training approach, batch adjusted network gradients (bang), which significantly improves the robustness of +machine learning models. 
while the bang technique does +not rely on any form of data augmentation or the utilization +of adversarial images for training, the resultant classifiers +are more resistant to adversarial perturbations while maintaining or even enhancing the overall classification performance.",1 +abstract,9 +"abstract—the increasing penetration of renewable energy in +recent years has led to more uncertainties in power systems. +these uncertainties have to be accommodated by flexible resources (i.e. upward and downward generation reserves). in this +paper, a novel concept, uncertainty marginal price (ump), is +proposed to price both the uncertainty and reserve. at the same +time, the energy is priced at locational marginal price (lmp). a +novel market clearing mechanism is proposed to credit the generation and reserve and to charge the load and uncertainty within +the robust unit commitment (ruc) in the day-ahead market. +we derive the umps and lmps in the robust optimization +framework. ump helps allocate the cost of generation reserves +to uncertainty sources. we prove that the proposed market +clearing mechanism leads to partial market equilibrium. we find +that transmission reserves must be kept explicitly in addition to +generation reserves for uncertainty accommodation. we prove +that transmission reserves for ramping delivery may lead to +financial transmission right (ftr) underfunding in existing +markets. the ftr underfunding can be covered by congestion +fund collected from uncertainty payment in the proposed market +clearing mechanism. simulations on a six-bus system and the +ieee 118-bus system are performed to illustrate the new concepts +and the market clearing mechanism. +index terms—uncertainty marginal price, cost causation, +robust unit commitment, financial transmission right, generation reserve, transmission reserve",3 +"abstract. in 2010, everitt and fountain introduced the concept of reflection monoids. 
+the boolean reflection monoids form a family of reflection monoids (symmetric inverse +semigroups are boolean reflection monoids of type a). in this paper, we give a family +of presentations of boolean reflection monoids and show how these presentations are +compatible with mutations of certain quivers. a feature of the quivers in this paper +corresponding to presentations of boolean reflection monoids is that the quivers have +frozen vertices. our results recover the presentations of boolean reflection monoids +given by everitt and fountain and the presentations of symmetric inverse semigroups +given by popova. surprisingly, inner by diagram automorphisms of irreducible weyl +groups or boolean reflection monoids can be constructed by sequences of mutations +preserving the same underlying diagrams. as an application, we study the cellularity of semigroup algebras of boolean reflection monoids and construct new cellular +bases of such cellular algebras using presentations we obtained and inner by diagram +automorphisms of boolean reflection monoids. +key words: boolean reflection monoids; presentations; mutations of quivers; inner +by diagram automorphisms; cellular semigroups; cellular basis +2010 mathematics subject classification: 13f60; 20m18; 16g20; 20f55; 51f15",4 +"abstract. in this paper we introduce and study the conjugacy ratio of a +finitely generated group, which is the limit at infinity of the quotient of the +conjugacy and standard growth functions. 
we conjecture that the conjugacy +ratio is 0 for all groups except the virtually abelian ones, and confirm this conjecture for certain residually finite groups of subexponential growth, hyperbolic +groups, right-angled artin groups, and the lamplighter group.",4 +"abstract +we propose a new paradigm for telecommunications, and develop a framework drawing on concepts +from information (i.e., different metrics of complexity) and computational (i.e., agent based modeling) +theory, adapted from complex system science. we proceed in a systematic fashion by dividing network +complexity understanding and analysis into different layers. modelling layer forms the foundation of the +proposed framework, supporting analysis and tuning layers. the modelling layer aims at capturing the +significant attributes of networks and the interactions that shape them, through the application of tools +such as agent-based modelling and graph theoretical abstractions, to derive new metrics that holistically +describe a network. the analysis phase completes the core functionality of the framework by linking our +new metrics to the overall network performance. the tuning layer augments this core with algorithms +that aim at automatically guiding networks toward desired conditions. in order to maximize the impact +of our ideas, the proposed approach is rooted in relevant, near-future architectures and use cases in 5g +networks, i.e., internet of things (iot) and self-organizing cellular networks. +index terms +complex systems science, agent-based modelling, self-organization, 5g, internet of things.",3 +"abstract— analyzing and reconstructing driving scenarios is +crucial for testing and evaluating highly automated vehicles +(havs). this research analyzed left-turn / straight-driving conflicts at unprotected intersections by extracting actual vehicle +motion data from a naturalistic driving database collected by +the university of michigan. 
nearly 7,000 left turn across path +- opposite direction (ltap/od) events involving heavy trucks +and light vehicles were extracted and used to build a stochastic +model of such ltap/od scenario, which is among the top +priority light-vehicle pre-crash scenarios identified by national +highway traffic safety administration (nhtsa). statistical +analysis showed that vehicle type is a significant factor, whereas +the change of season seems to have limited influence on the +statistical nature of the conflict. the results can be used to +build testing environments for havs to simulate the ltap/od +crash cases in a stochastic manner.",3 +"abstract—a revised incremental conductance (inccond) +maximum power point tracking (mppt) algorithm for pv +generation systems is proposed in this paper. the commonly +adopted traditional inccond method uses a constant step size +for voltage adjustment and is difficult to achieve both a good +tracking performance and quick elimination of the oscillations, +especially under the dramatic changes of the environment +conditions. for the revised algorithm, the incremental voltage +change step size is adaptively adjusted based on the slope of the +power-voltage (p-v) curve. an accelerating factor and a +decelerating factor are further applied to adjust the voltage step +change considering whether the sign of the p-v curve slope +remains the same or not in a subsequent tracking step. in +addition, the upper bound of the maximum voltage step change +is also updated considering the information of sign changes. the +revised mppt algorithm can quickly track the maximum power +points (mpps) and remove the oscillation of the actual operation +points around the real mpps. the effectiveness of the revised +algorithm is demonstrated using a simulation. 
+index terms—inccond mppt algorithm, fractional opencircuit/short-circuit mppt algorithm, p&o mppt algorithm,
+solar pv generation.",5
abstract,1
"abstract
canonical correlation analysis (cca) is a fundamental statistical tool for exploring the
correlation structure between two sets of random variables. in this paper, motivated by the
recent success of applying cca to learn low dimensional representations of high dimensional
objects, we propose two losses based on the principal angles between the model spaces spanned
by the sample canonical variates and their population correspondents, respectively. we further
characterize the non-asymptotic error bounds for the estimation risks under the proposed error
metrics, which reveal how the performance of sample cca depends adaptively on key quantities
including the dimensions, the sample size, the condition number of the covariance matrices and
particularly the population canonical correlation coefficients. the optimality of our uniform
upper bounds is also justified by lower-bound analysis based on stringent and localized parameter
spaces. to the best of our knowledge, for the first time our paper separates p1 and p2 for the
first order term in the upper bounds without assuming the residual correlations are zeros. more
significantly, our paper derives (1 − λk²)(1 − λk+1²)/(λk − λk+1)² for the first time in the nonasymptotic cca estimation convergence rates, which is essential to understand the behavior of
cca when the leading canonical correlation coefficients are close to 1.",10
"abstract
in the realm of multimodal communication, sign
language is, and continues to be, one of the most
understudied areas. in line with recent advances in
the field of deep learning, there are far reaching
implications and applications that neural networks can
have for sign language interpretation. 
in this paper, we
present a method for using deep convolutional
networks to classify images of both the letters and
digits in american sign language.
1. introduction
sign language is a unique type of communication that
often goes understudied. while the translation process
between signs and a spoken or written language is formally
called ‘interpretation,’ the function that interpreting plays
is the same as that of translation for a spoken language. in
our research, we look at american sign language (asl),
which is used in the usa and in english-speaking canada
and has many different dialects. there are 22 handshapes
that correspond to the 26 letters of the alphabet, and you
can sign the 10 digits on one hand.",1
"abstract
a systematic convolutional encoder of rate (n − 1)/n and maximum degree d generates a code of free distance at most
d = d + 2 and, at best, a column distance profile (cdp) of [2, 3, . . . , d]. a code is maximum distance separable (mds) if it
possesses this cdp. applied on a communication channel over which packets are transmitted sequentially and which loses (erases)
packets randomly, such a code allows the recovery from any pattern of j erasures in the first j n-packet blocks for j < d, with
a delay of at most j blocks counting from the first erasure. this paper addresses the problem of finding the largest d for which
a systematic rate (n − 1)/n code over gf(2^m) exists, for given n and m. in particular, constructions for rates (2^m − 1)/2^m and
(2^(m−1) − 1)/2^(m−1) are presented which provide optimum values of d equal to 3 and 4, respectively. a search algorithm is also
developed, which produces new codes for d for field sizes 2^m ≤ 2^14. 
using a complete search version of the algorithm, the
maximum value of d, and codes that achieve it, are determined for all code rates ≥ 1/2 and every field size gf(2^m) for m ≤ 5
(and for some rates for m = 6).",7
"abstract. aschbacher’s program for the classification of simple fusion systems of “odd”
type at the prime 2 has two main stages: the classification of 2-fusion systems of subintrinsic component type and the classification of 2-fusion systems of j-component type. we
make a contribution to the latter stage by classifying 2-fusion systems with a j-component
isomorphic to the 2-fusion systems of several sporadic groups under the assumption that
the centralizer of this component is cyclic.",4
"abstract
we consider the problem of predicting the next observation given a sequence of past observations, and consider the extent to which accurate prediction requires complex algorithms that
explicitly leverage long-range dependencies. perhaps surprisingly, our positive results show that
for a broad class of sequences, there is an algorithm that predicts well on average, and bases
its predictions only on the most recent few observations together with a set of simple summary
statistics of the past observations. specifically, we show that for any distribution over observations, if the mutual information between past observations and future observations is upper
bounded by i, then a simple markov model over the most recent i/ε observations obtains expected kl error ε—and hence ℓ1 error √ε—with respect to the optimal predictor that has access
to the entire past and knows the data generating distribution. 
for a hidden markov model with
n hidden states, i is bounded by log n, a quantity that does not depend on the mixing time,
and we show that the trivial prediction algorithm based on the empirical frequencies of length
o(log n/ε) windows of observations achieves this error, provided the length of the sequence is
d^Ω(log n/ε), where d is the size of the observation alphabet.
we also establish that this result cannot be improved upon, even for the class of hmms,
in the following two senses: first, for hmms with n hidden states, a window length of log n/ε
is information-theoretically necessary to achieve expected kl error ε, or ℓ1 error √ε. second,
the d^Θ(log n/ε) samples required to accurately estimate the markov model when observations
are drawn from an alphabet of size d is necessary for any computationally tractable learning/prediction algorithm, assuming the hardness of strongly refuting a certain class of csps.",2
"abstract
this paper proposes an efficient and novel method to address range search on multidimensional points in θ(t) time, where t is the number of points reported in 1, then it is possible
to exhibit consistent tests. in this contribution, we prove a contrario that under the condition λ1 (x0 ) < 1, there are no consistent
tests. our proof is inspired by previous works devoted to the case of rank 1 matrices x0 .
index terms
statistical detection tests, large random matrices, large deviation principle.",10
"abstract
automated analysis methods are crucial aids for monitoring
and defending a network to protect the sensitive or confidential data it hosts. this work introduces a flexible, powerful,
and unsupervised approach to detecting anomalous behavior in computer and network logs; one that largely eliminates
domain-dependent feature engineering employed by existing
methods. 
by treating system logs as threads of interleaved
“sentences” (event log lines) to train online unsupervised neural network language models, our approach provides an adaptive model of normal network behavior. we compare the effectiveness of both standard and bidirectional recurrent neural network language models at detecting malicious activity
within network log data. extending these models, we introduce a tiered recurrent architecture, which provides context
by modeling sequences of users’ actions over time. compared to isolation forest and principal components analysis, two popular anomaly detection algorithms, we observe
superior performance on the los alamos national laboratory cyber security dataset. for log-line-level red team detection, our best performing character-based model provides
test set area under the receiver operating characteristic curve
of 0.98, demonstrating the strong fine-grained anomaly detection performance of this approach on open vocabulary logging sources.",9
"abstract—when a human matches two images, the viewer has
a natural tendency to view the wide area around the target
pixel to obtain clues of right correspondence. however, designing
a matching cost function that works on a large window in
the same way is difficult. the cost function is typically not
intelligent enough to discard the information irrelevant to the
target pixel, resulting in undesirable artifacts. in this paper, we
propose a novel convolutional neural network (cnn) module to
learn a stereo matching cost with a large-sized window. unlike
conventional pooling layers with strides, the proposed per-pixel
pyramid-pooling layer can cover a large area without a loss
of resolution and detail. therefore, the learned matching cost
function can successfully utilize the information from a large area
without introducing the fattening effect. 
the proposed method is
robust despite the presence of weak textures, depth discontinuity,
illumination, and exposure difference. the proposed method
achieves near-peak performance on the middlebury benchmark.
index terms—stereo matching, pooling, cnn",1
"abstract
this paper considers two brownian motions in a situation where one is correlated to the other with a slight
delay. we study the problem of estimating the time lag parameter between these brownian motions from their
high-frequency observations, which are possibly subject to measurement errors. the measurement errors are
assumed to be i.i.d., centered gaussian and independent of the latent processes. we investigate the asymptotic
structure of the likelihood ratio process for this model when the lag parameter is asymptotically infinitesimal.
we show that the structure of the limit experiment depends on the level of the measurement errors: if the measurement errors locally dominate the latent brownian motions, the model enjoys the lan property. otherwise,
the limit experiment does not result in typical ones appearing in the literature. we also discuss the efficient
estimation of the lag parameter to highlight the statistical implications.
keywords and phrases: asymptotic efficiency; endogenous noise; lead-lag effect; local asymptotic normality;
microstructure noise.",10
"abstract
we provide a new computationally-efficient class of estimators for risk minimization.
we show that these estimators are robust for general statistical models: in the classical huber ǫ-contamination model and in heavy-tailed settings. our workhorse is a novel
robust variant of gradient descent, and we provide conditions under which our gradient descent variant provides accurate estimators in a general convex risk minimization problem.
we provide specific consequences of our theory for linear regression, logistic regression
and for estimation of the canonical parameters in an exponential family. 
these results +provide some of the first computationally tractable and provably robust estimators for +these canonical statistical models. finally, we study the empirical performance of our +proposed methods on synthetic and real datasets, and find that our methods convincingly +outperform a variety of baselines.",2 +"abstract +this work introduces a novel framework for quantifying the presence and strength of recurrent +dynamics in video data. specifically, we provide continuous measures of periodicity (perfect repetition) and quasiperiodicity (superposition of periodic modes with non-commensurate periods), in +a way which does not require segmentation, training, object tracking or 1-dimensional surrogate +signals. our methodology operates directly on video data. the approach combines ideas from +nonlinear time series analysis (delay embeddings) and computational topology (persistent homology), by translating the problem of finding recurrent dynamics in video data, into the problem +of determining the circularity or toroidality of an associated geometric space. through extensive +testing, we show the robustness of our scores with respect to several noise models/levels; we show +that our periodicity score is superior to other methods when compared to human-generated periodicity rankings; and furthermore, we show that our quasiperiodicity score clearly indicates the +presence of biphonation in videos of vibrating vocal folds, which has never before been accomplished end to end quantitatively.",1 +"abstract +there is no denying the tremendous leap in the performance of machine learning methods in the past half-decade. some might even say that specific sub-fields +in pattern recognition, such as machine-vision, are as good as solved, reaching +human and super-human levels. arguably, lack of training data and computation +power are all that stand between us and solving the remaining ones. 
in this position paper we underline cases in vision which are challenging to machines and +even to human observers. this is to show limitations of contemporary models that +are hard to ameliorate by following the current trend to increase training data, network capacity or computational power. moreover, we claim that attempting to do +so is in principle a suboptimal approach. we provide a taster of such examples in +hope to encourage and challenge the machine learning community to develop new +directions to solve the said difficulties.",1 +"abstract +motivated by recent work on ordinal embedding (kleindessner and von luxburg, 2014), we +derive large sample consistency results and rates of convergence for the problem of embedding +points based on triple or quadruple distance comparisons. we also consider a variant of this +problem where only local comparisons are provided. finally, inspired by (jamieson and nowak, +2011), we bound the number of such comparisons needed to achieve consistency. +keywords: ordinal embedding, non-metric multidimensional scaling (mds), dissimilarity comparisons, landmark multidimensional scaling.",10 +"abstract +we study one dimension in program evolution, namely the evolution of the datatype declarations in a program. to this end, a suite of basic transformation operators is designed. we +cover structure-preserving refactorings, but also structure-extending and -reducing adaptations. both the object programs that are subject to datatype transformations, and the meta +programs that encode datatype transformations are functional programs.",6 +"abstract +in some applications, the variance of additive measurement noise depends on the signal that +we aim to measure. for instance, additive gaussian signal-dependent noise (agsdn) channel +models are used in molecular and optical communication. herein we provide lower and upper +bounds on the capacity of additive signal-dependent noise (asdn) channels. 
the idea of the +first lower bound is the extension of the majorization inequality, and for the second one, it +uses some calculations based on the fact that h (y ) > h (y |z). both of them are valid for +all additive signal-dependent noise (asdn) channels defined in the paper. the upper bound +is based on a previous idea of the authors (“symmetric relative entropy”) and is used for the +additive gaussian signal-dependent noise (agsdn) channels. these bounds indicate that in +asdn channels (unlike the classical awgn channels), the capacity does not necessarily become +larger by making the variance function of the noise smaller. we also provide sufficient conditions +under which the capacity becomes infinity. this is complemented by a number of conditions +that imply capacity is finite and a unique capacity achieving measure exists (in the sense of the +output measure). +keywords: signal-dependent noise channels, molecular communication, channels with infinite capacity, existence of capacity-achieving distribution.",7 +"abstract +we introduce a new model of stochastic bandits with adversarial corruptions which aims to +capture settings where most of the input follows a stochastic pattern but some fraction of it can +be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews and email spam. +the goal of this model is to encourage the design of bandit algorithms that (i) work well in +mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as +we move from fully stochastic to fully adversarial models. +in our model, the rewards for all arms are initially drawn from a distribution and are then +altered by an adaptive adversary. we provide a simple algorithm whose performance gracefully +degrades with the total corruption the adversary injected in the data, measured by the sum +across rounds of the biggest alteration the adversary made in the data in that round; this +total corruption is denoted by c. 
our algorithm provides a guarantee that retains the optimal +guarantee (up to a logarithmic term) if the input is stochastic and whose performance degrades +linearly to the amount of corruption c, while crucially being agnostic to it. we also provide a +lower bound showing that this linear degradation is necessary if the algorithm achieves optimal +performance in the stochastic setting (the lower bound works even for a known amount of +corruption, a special case in which our algorithm achieves optimal performance without the +extra logarithm).",8 +"abstract +rectified linear units, or relus, have become the preferred activation function for artificial neural +networks. in this paper we consider two basic learning problems assuming that the underlying data +follow a generative model based on a relu-network – a neural network with relu activations. as a +primarily theoretical study, we limit ourselves to a single-layer network. the first problem we study +corresponds to dictionary-learning in the presence of nonlinearity (modeled by the relu functions). +given a set of observation vectors yi ∈ rd , i = 1, 2, . . . , n, we aim to recover d × k matrix a and the +latent vectors {ci } ⊂ rk under the model yi = relu(aci + b), where b ∈ rd is a random bias. we +show that it is possible to recover the column space of a within an error of o(d) (in frobenius norm) +under certain conditions on the probability distribution of b. +the second problem we consider is that of robust recovery of the signal in the presence of outliers, +i.e., large but sparse noise. in this setting we are interested in recovering the latent vector c from its +noisy nonlinear sketches of the form v = relu(ac) + e + w, where e ∈ rd denotes the outliers with +sparsity s and w ∈ rd denote the dense but small noise. this line of work has recently been studied +(soltanolkotabi, 2017) without the presence of outliers. 
for this problem, we show that a generalized
lasso algorithm is able to recover the signal c ∈ rk within an ℓ2 error of o(·)
when a is a random gaussian matrix.",7
"abstract
in reinforcement learning, agents learn by performing actions and observing their
outcomes. sometimes, it is desirable for a human operator to interrupt an agent
in order to prevent dangerous situations from happening. yet, as part of their
learning process, agents may link these interruptions, that impact their reward, to
specific states and deliberately avoid them. the situation is particularly challenging in a multi-agent context because agents might not only learn from their own
past interruptions, but also from those of other agents. orseau and armstrong [16]
defined safe interruptibility for one learner, but their work does not naturally extend to multi-agent systems. this paper introduces dynamic safe interruptibility,
an alternative definition more suited to decentralized learning problems, and studies this notion in two learning frameworks: joint action learners and independent
learners. we give realistic sufficient conditions on the learning algorithm to enable dynamic safe interruptibility in the case of joint action learners, yet show that
these conditions are not sufficient for independent learners. we show however that
if agents can detect interruptions, it is possible to prune the observations to ensure
dynamic safe interruptibility even for independent learners.",2
"abstract in this article, we consider the general problem of checking the
correctness of matrix multiplication. given three n × n matrices a, b, and
c, the goal is to verify that a × b = c without carrying out the computationally costly operations of matrix multiplication and comparing the product
a × b with c, term by term. this is especially important when some or all of
these matrices are very large, and when the computing environment is prone
to soft errors. 
here we extend freivalds’ algorithm to a gaussian variant of
freivalds’ algorithm (gvfa) by projecting the product a × b as well as c
onto a gaussian random vector and then comparing the resulting vectors. the
computational complexity of gvfa is consistent with that of freivalds’ algorithm, which is o(n²). however, unlike freivalds’ algorithm, whose probability
of a false positive is 2^−k, where k is the number of iterations, our theoretical
analysis shows that when a × b ≠ c, gvfa produces a false positive on a set
of inputs of measure zero with exact arithmetic. when we introduce round-off
error and floating point arithmetic into our analysis, we can show that the
larger this error, the higher the probability that gvfa avoids false positives.
hao ji
department of computer science
old dominion university
e-mail: hji@cs.odu.edu
michael mascagni
departments of computer science, mathematics, and scientific computing
florida state university
applied and computational mathematics division
national institute of standards and technology
e-mail: mascagni@fsu.edu
yaohang li
department of computer science
old dominion university
tel.: 757-683-7721
fax: 757-683-4900
e-mail: yaohang@cs.odu.edu",8
"abstract. we offer a general bayes theoretic framework to tackle the
model selection problem under a two-step prior design: the first-step
prior serves to assess the model selection uncertainty, and the second-step prior quantifies the prior belief on the strength of the signals within
the model chosen from the first step.
we establish non-asymptotic oracle posterior contraction rates under (i) a new bernstein-inequality condition on the log likelihood ratio
of the statistical experiment, (ii) a local entropy condition on the dimensionality of the models, and (iii) a sufficient mass condition on the
second-step prior near the best approximating signal for each model.
the resulting posterior +mean also satisfies an oracle inequality, thus automatically serving as an +adaptive point estimator in a frequentist sense. model mis-specification +is allowed in these oracle rates. +the new bernstein-inequality condition not only eliminates the convention of constructing explicit tests with exponentially small type i +and ii errors, but also suggests the intrinsic metric to use in a given +statistical experiment, both as a loss function and as an entropy measurement. this gives a unified reduction scheme for many experiments +considered in [23] and beyond. as an illustration for the scope of our +general results in concrete applications, we consider (i) trace regression, +(ii) shape-restricted isotonic/convex regression, (iii) high-dimensional +partially linear regression and (iv) covariance matrix estimation in the +sparse factor model. these new results serve either as theoretical justification of practical prior proposals in the literature, or as an illustration +of the generic construction scheme of a (nearly) minimax adaptive estimator for a multi-structured experiment.",10 +"abstract. this paper introduces the first deep neural network-based estimation metric for the jigsaw puzzle problem. given two puzzle piece edges, +the neural network predicts whether or not they should be adjacent in the +correct assembly of the puzzle, using nothing but the pixels of each piece. +the proposed metric exhibits an extremely high precision even though no +manual feature extraction is performed. when incorporated into an existing puzzle solver, the solution’s accuracy increases significantly, achieving +thereby a new state-of-the-art standard.",1 +"abstract +this paper proposes distributed algorithms for multi-agent networks +to achieve a solution in finite time to a linear equation ax = b where a +has full row rank, and with the minimum l1 -norm in the underdetermined +case (where a has more columns than rows). 
the underlying network is +assumed to be undirected and fixed, and an analytical proof is provided for +the proposed algorithm to drive all agents’ individual states to converge +to a common value, viz a solution of ax = b, which is the minimum l1 norm solution in the underdetermined case. numerical simulations are +also provided as validation of the proposed algorithms.",3 +"abstract +this paper studies unmanned aerial vehicle (uav) aided wireless communication systems where +a uav supports uplink communications of multiple ground nodes (gns) while flying over the area of +the interest. in this system, the propulsion energy consumption at the uav is taken into account so that +the uav’s velocity and acceleration should not exceed a certain threshold. we formulate the minimum +average rate maximization problem and the energy efficiency (ee) maximization problem by jointly +optimizing the trajectory, velocity, and acceleration of the uav and the uplink transmit power at the gns. +as these problems are non-convex in general, we employ the successive convex approximation (sca) +techniques. to this end, proper convex approximations for the non-convex constraints are derived, and +iterative algorithms are proposed which converge to a local optimal point. numerical results demonstrate +that the proposed algorithms outperform baseline schemes for both problems. especially for the ee +maximization problem, the proposed algorithm exhibits about 109 % gain over the baseline scheme.",7 +"abstract. many high-dimensional uncertainty quantification problems are solved by polynomial dimensional decomposition (pdd), which represents fourier-like series expansion in terms of random +orthonormal polynomials with increasing dimensions. 
this study constructs dimension-wise and +orthogonal splitting of polynomial spaces, proves completeness of polynomial orthogonal basis +for prescribed assumptions, and demonstrates mean-square convergence to the correct limit – all +associated with pdd. a second-moment error analysis reveals that pdd cannot commit larger +error than polynomial chaos expansion (pce) for the appropriately chosen truncation parameters. from the comparison of computational efforts, required to estimate with the same precision +the variance of an output function involving exponentially attenuating expansion coefficients, the +pdd approximation can be markedly more efficient than the pce approximation. +key words. uncertainty quantification, anova decomposition, multivariate orthogonal polynomials, polynomial chaos expansion.",10 +"abstract +over the recent years, the field of whole metagenome shotgun sequencing has witnessed significant +growth due to the high-throughput sequencing technologies that allow sequencing genomic samples cheaper, +faster, and with better coverage than before. this technical advancement has initiated the trend of sequencing multiple samples in different conditions or environments to explore the similarities and dissimilarities +of the microbial communities. examples include the human microbiome project and various studies of the +human intestinal tract. with the availability of ever larger databases of such measurements, finding samples +similar to a given query sample is becoming a central operation. in this paper, we develop a content-based +exploration and retrieval method for whole metagenome sequencing samples. we apply a distributed string +mining framework to efficiently extract all informative sequence k-mers from a pool of metagenomic samples and use them to measure the dissimilarity between two samples. 
we evaluate the performance of +the proposed approach on two human gut metagenome data sets as well as human microbiome project +metagenomic samples. we observe significant enrichment for diseased gut samples in results of queries +with another diseased sample and very high accuracy in discriminating between different body sites even +though the method is unsupervised. a software implementation of the dsm framework is available at +https://github.com/hiitmetagenomics/dsm-framework.",5 +"abstract. we study lattice embeddings for the class of countable groups γ +defined by the property that the largest amenable uniformly recurrent subgroup +aγ is continuous. when aγ comes from an extremely proximal action and +the envelope of aγ is co-amenable in γ, we obtain restrictions on the locally +compact groups g that contain a copy of γ as a lattice, notably regarding +normal subgroups of g, product decompositions of g, and more generally dense +mappings from g to a product of locally compact groups. +we then focus on a family of finitely generated groups acting on trees within +this class, and show that these embed as cocompact irreducible lattices in some +locally compact wreath products. this provides examples of finitely generated +simple groups quasi-isometric to a wreath product c ≀ f , where c is a finite +group and f a non-abelian free group. +keywords. lattices, locally compact groups, strongly proximal actions, chabauty +space, groups acting on trees, irreducible lattices in wreath products.",4 +"abstract—in a typical multitarget tracking (mtt) scenario, +the sensor state is either assumed known, or tracking is performed based on the sensor’s (relative) coordinate frame. this +assumption becomes violated when the mtt sensor, such as a +vehicular radar, is mounted on a vehicle, and the target state +should be represented in a global (absolute) coordinate frame. +then it is important to consider the uncertain sensor location for +mtt. 
furthermore, in a multisensor scenario, where multiple +sensors observe a common set of targets, state information from +one sensor can be utilized to improve the state of another sensor. +in this paper, we present a poisson multi-bernoulli mtt filter, +which models the uncertain sensor state. the multisensor case +is addressed in an asynchronous way, where measurements are +incorporated sequentially based on the arrival of new sensor +measurements. in doing so, targets observed from a well localized +sensor reduce the state uncertainty at another poorly localized +sensor, provided that a common non-empty subset of features +is observed. the proposed mtt filter has low computational +demands due to its parametric implementation. +numerical results demonstrate the performance benefits of modeling the uncertain sensor state in feature tracking as well as the +reduction of sensor state uncertainty in a multisensor scenario +compared to a per sensor kalman filter. scalability results display +the linear increase of computation time with number of sensors +or features present.",3 +"abstract +solving #sat problems is an important area of work. in this paper, we discuss implementing +tetris, an algorithm originally designed for handling natural joins, as an exact model counter for the +#sat problem. tetris uses a simple geometric framework, yet manages to achieve the fractional +hypertree-width bound. its design allows it to handle complex problems involving extremely large +numbers of clauses on which other state-of-the-art model counters do not perform well, yet still performs strongly on standard sat benchmarks. +we have achieved the following objectives. first, we have found a natural set of model counting benchmarks on which tetris outperforms other model counters. second, we have constructed +a data structure capable of efficiently handling and caching all of the data tetris needs to work on +over the course of the algorithm. 
third, we have modified tetris in order to move from a theoretical, +asymptotic-time-focused environment to one that performs well in practice. in particular, we have +managed to produce results keeping us within a single order of magnitude as compared to other +solvers on most benchmarks, and outperform those solvers by multiple orders of magnitude on others.",8 +"abstract +we propose a new grayscale image denoiser, dubbed as neural affine image +denoiser (neural aide), which utilizes neural network in a novel way. unlike +other neural network based image denoising methods, which typically apply simple +supervised learning to learn a mapping from a noisy patch to a clean patch, we +formulate to train a neural network to learn an affine mapping that gets applied +to a noisy pixel, based on its context. our formulation enables both supervised +training of the network from the labeled training dataset and adaptive fine-tuning +of the network parameters using the given noisy image subject to denoising. the +key tool for devising neural aide is to devise an estimated loss function of +the mse of the affine mapping, solely based on the noisy data. as a result, our +algorithm can outperform most of the recent state-of-the-art methods in the standard +benchmark datasets. moreover, our fine-tuning method can nicely overcome one of +the drawbacks of the patch-level supervised learning methods in image denoising; +namely, a supervised trained model with a mismatched noise variance can be mostly +corrected as long as we have the matched noise variance during the fine-tuning +step.",1 +"abstract— we explore the problem of classification within a +medical image data-set based on a feature vector extracted from +the deepest layer of pre-trained convolution neural networks. 
+we have used feature vectors from several pre-trained structures, +including networks with/without transfer learning to evaluate +the performance of pre-trained deep features versus cnns +which have been trained by that specific dataset as well as the +impact of transfer learning with a small number of samples. all +experiments are done on kimia path24 dataset which consists of +27,055 histopathology training patches in 24 tissue texture classes +along with 1,325 test patches for evaluation. the result shows that +pre-trained networks are quite competitive against training from +scratch. as well, fine-tuning does not seem to add any tangible +improvement for vgg16 to justify additional training while we +observed considerable improvement in retrieval and classification +accuracy when we fine-tuned the inception structure. +keywords— image retrieval, medical imaging, deep learning, +cnns, digital pathology, image classification, deep features, vgg, +inception.",1 +"abstract. in algebra such as algebraic geometry, modular representation theory and +commutative ring theory, we study algebraic objects through associated triangulated +categories and topological spaces. in this paper, we consider the relationship between +such triangulated categories and topological spaces. to be precise, we explore necessary +conditions for derived equivalence of noetherian schemes, stable equivalence of finite +groups, and singular equivalence of commutative noetherian rings by using associated +topological spaces.",0 +"abstract +the present paper considers a hybrid local search approach to the eternity +ii puzzle and to unsigned, rectangular, edge matching puzzles in general. +both an original mixed-integer linear programming (milp) formulation and +a novel max-clique formulation are presented for this np-hard problem. 
although the presented formulations remain computationally intractable for +medium and large sized instances, they can serve as the basis for developing heuristic decompositions and very large scale neighbourhoods. as a side +product of the max-clique formulation, new hard-to-solve instances are published for the academic research community. two reasonably well performing +milp-based constructive methods are presented and used for determining +the initial solution of a multi-neighbourhood local search approach. experimental results confirm that this local search can further improve the results +obtained by the constructive heuristics and is quite competitive with the +state of the art procedures. +keywords: edge matching puzzle, hybrid approach, local search +1. introduction +the eternity ii puzzle (eii) is a commercial edge matching puzzle in which +256 square tiles with four coloured edges must be arranged on a 16 × 16 grid +∗",8 +"abstract +structured prediction is concerned with predicting +multiple inter-dependent labels simultaneously. +classical methods like crf achieve this by maximizing a score function over the set of possible +label assignments. recent extensions use neural +networks to either implement the score function +or in maximization. the current paper takes an alternative approach, using a neural network to generate the structured output directly, without going +through a score function. we take an axiomatic +perspective to derive the desired properties and +invariances of a such network to certain input permutations, presenting a structural characterization +that is provably both necessary and sufficient. we +then discuss graph-permutation invariant (gpi) +architectures that satisfy this characterization and +explain how they can be used for deep structured +prediction. we evaluate our approach on the challenging problem of inferring a scene graph from +an image, namely, predicting entities and their +relations in the image. 
we obtain state-of-the-art +results on the challenging visual genome benchmark, outperforming all recent approaches.",1 +"abstract +this paper provides conditions under which a non-stationary copula-based markov process +is β-mixing. we introduce, as a particular case, a convolution-based gaussian markov process +which generalizes the standard random walk allowing the increments to be dependent.",10 +"abstract as intelligent systems gain autonomy and capability, it becomes vital to +ensure that their objectives match those of their human users; this is known as the +value-alignment problem. in robotics, value alignment is key to the design of collaborative robots that can integrate into human workflows, successfully inferring and +adapting to their users’ objectives as they go. we argue that a meaningful solution to +value alignment must combine multi-agent decision theory with rich mathematical +models of human cognition, enabling robots to tap into people’s natural collaborative capabilities. we present a solution to the cooperative inverse reinforcement +learning (cirl) dynamic game based on well-established cognitive models of decision making and theory of mind. the solution captures a key reciprocity relation: the +human will not plan her actions in isolation, but rather reason pedagogically about +how the robot might learn from them; the robot, in turn, can anticipate this and interpret the human’s actions pragmatically. to our knowledge, this work constitutes the +first formal analysis of value alignment grounded in empirically validated cognitive +models. +key words: value alignment, human-robot interaction, dynamic game theory",2 +"abstract—generative network models play an important role +in algorithm development, scaling studies, network analysis, +and realistic system benchmarks for graph data sets. the +commonly used graph-based benchmark model r-mat has +some drawbacks concerning realism and the scaling behavior +of network properties. 
a complex network model gaining +considerable popularity builds random hyperbolic graphs, generated by distributing points within a disk in the hyperbolic +plane and then adding edges between points whose hyperbolic +distance is below a threshold. we present in this paper a +fast generation algorithm for such graphs. our experiments +show that our new generator achieves speedup factors of 3-60 +over the best previous implementation. one billion edges can +now be generated in under one minute on a shared-memory +workstation. furthermore, we present a dynamic extension to +model gradual network change, while preserving at each step +the point position probabilities.",8 +"abstract. in 1982, drezner proposed the (1|1)-centroid problem on the +plane, in which two players, called the leader and the follower, open +facilities to provide service to customers in a competitive manner. the +leader opens the first facility, and the follower opens the second. each +customer will patronize the facility closest to him (ties broken in favor +of the first one), thereby decides the market share of the two facilities. +the goal is to find the best position for the leader’s facility so that its +market share is maximized. the best algorithm of this problem is an +o(n2 log n)-time parametric search approach, which searches over the +space of market share values. +in the same paper, drezner also proposed a general version of (1|1)centroid problem by introducing a minimal distance constraint r, such +that the follower’s facility is not allowed to be located within a distance +r from the leader’s. he proposed an o(n5 log n)-time algorithm for this +general version by identifying o(n4 ) points as the candidates of the optimal solution and checking the market share for each of them. 
in this
+paper, we develop a new parametric search approach searching over the
+o(n4 ) candidate points, and present an o(n2 log n)-time algorithm for
+the general version, thereby closing the o(n3 ) gap between the two bounds.
+keywords: competitive facility, euclidean plane, parametric search",8
+"abstract
+a new design methodology is introduced, with some examples on building domain
+specific languages hierarchy on top of scheme.",2
+"abstract. in this paper we introduce associative commutative distributive term rewriting (acdtr), a rewriting language for rewriting
+logical formulae. acdtr extends ac term rewriting by adding distribution of conjunction over other operators. conjunction is vital for expressive term rewriting systems since it allows us to require that multiple
+conditions hold for a term rewriting rule to be used. acdtr uses the
+notion of a “conjunctive context”, which is the conjunction of constraints
+that must hold in the context of a term, to enable the programmer to
+write very expressive and targeted rewriting rules. acdtr can be seen
+as a general logic programming language that extends constraint handling rules and ac term rewriting. in this paper we define the semantics
+of acdtr and describe our prototype implementation.",6
+"abstract
+this paper addresses an open problem in traffic modeling: the second-order macroscopic node problem. a
+second-order macroscopic traffic model, in contrast to a first-order model, allows for variation of driving behavior
+across subpopulations of vehicles in the flow. the second-order models are thus more descriptive (e.g., they have
+been used to model variable mixtures of behaviorally-different traffic, like car/truck traffic, autonomous/human-driven traffic, etc.), but are much more complex.
the second-order node problem is a particularly complex
+problem, as it requires the resolution of discontinuities in traffic density and mixture characteristics, and solving
+of throughflows for arbitrary numbers of input and output roads to a node (in other words, this is an arbitrary-dimensional riemann problem with two conserved quantities). we propose a solution to this problem by making
+use of a recently-introduced dynamic system characterization of the first-order node model problem, which gives
+insight and intuition as to the continuous-time dynamics implicit in first-order node models. we use this intuition
+to extend the dynamic system node model to the second-order setting. we also extend the well-known “generic
+class of node model” constraints to the second order and present a simple solution algorithm to the second-order
+node problem. this node model has immediate applications in allowing modeling of behaviorally-complex traffic
+flows of contemporary interest (like partially-autonomous-vehicle flows) in arbitrary road networks.",3
+"abstract—in this paper, we tackle the direct and inverse
+problems for the remote-field eddy-current (rfec) technology.
+the direct problem is the sensor model, where given the geometry
+the measurements are obtained. conversely, the inverse problem
+is where the geometry needs to be estimated given the field
+measurements. these problems are particularly important in
+the field of non-destructive testing (ndt) because they allow
+assessing the quality of the structure monitored. we solve the
+direct problem in a parametric fashion using least absolute
+shrinkage and selection operator (lasso). the proposed
+inverse model uses the parameters from the direct model to
+recover the thickness using least squares producing the optimal
+solution given the direct model. this study is restricted to the
+2d axisymmetric scenario.
both, direct and inverse models, are +validated using a finite element analysis (fea) environment +with realistic pipe profiles.",3 +"abstract. this is a continuation of a previous paper by the same authors. +in the former paper, it was proved that in order to obtain local uniformization +for valuations centered on local domains, it is enough to prove it for rank +one valuations. in this paper, we extend this result to the case of valuations +centered on rings which are not necessarily integral domains and may even +contain nilpotents.",0 +"abstract +dropout is a popular technique for regularizing artificial neural networks. dropout +networks are generally trained by minibatch gradient descent with a dropout mask +turning off some of the units—a different pattern of dropout is applied to every +sample in the minibatch. we explore a very simple alternative to the dropout mask. +instead of masking dropped out units by setting them to zero, we perform matrix +multiplication using a submatrix of the weight matrix—unneeded hidden units are +never calculated. performing dropout batchwise, so that one pattern of dropout is +used for each sample in a minibatch, we can substantially reduce training times. +batchwise dropout can be used with fully-connected and convolutional neural networks.",9 +"abstract +we consider the problem faced by a service platform that needs to match supply with +demand, but also to learn attributes of new arrivals in order to match them better in the +future. we introduce a benchmark model with heterogeneous workers and jobs that arrive +over time. job types are known to the platform, but worker types are unknown and must be +learned by observing match outcomes. workers depart after performing a certain number +of jobs. the payoff from a match depends on the pair of types and the goal is to maximize +the steady-state rate of accumulation of payoff. 
+our main contribution is a complete characterization of the structure of the optimal policy in the limit that each worker performs many jobs. the platform faces a trade-off for each +worker between myopically maximizing payoffs (exploitation) and learning the type of the +worker (exploration). this creates a multitude of multi-armed bandit problems, one for each +worker, coupled together by the constraint on availability of jobs of different types (capacity +constraints). we find that the platform should estimate a shadow price for each job type, +and use the payoffs adjusted by these prices, first, to determine its learning goals and then, +for each worker, (i) to balance learning with payoffs during the “exploration phase”, and (ii) +to myopically match after it has achieved its learning goals during the “exploitation phase."" +keywords: matching, learning, two-sided platform, multi-armed bandit, capacity constraints.",8 +"abstract. a novel matching based heuristic algorithm designed to detect specially formulated infeasible {0, 1} ips is +presented. the algorithm’s input is a set of nested doubly stochastic subsystems and a set e of instance defining variables set +at zero level. the algorithm deduces additional variables at zero level until either a constraint is violated (the ip is infeasible), +or no more variables can be deduced zero (the ip is undecided). all feasible ips, and all infeasible ips not detected infeasible +are undecided. we successfully apply the algorithm to a small set of specially formulated infeasible {0, 1} ip instances of the +hamilton cycle decision problem. we show how to model both the graph and subgraph isomorphism decision problems for input +to the algorithm. increased levels of nested doubly stochastic subsystems can be implemented dynamically. 
the algorithm is +designed for parallel processing, and for inclusion of techniques in addition to matching.",8 +"abstract +we propose thalnet, a deep learning model inspired by neocortical communication +via the thalamus. our model consists of recurrent neural modules that send features +through a routing center, endowing the modules with the flexibility to share features +over multiple time steps. we show that our model learns to route information +hierarchically, processing input data by a chain of modules. we observe common +architectures, such as feed forward neural networks and skip connections, emerging +as special cases of our architecture, while novel connectivity patterns are learned +for the text8 compression task. our model outperforms standard recurrent neural +networks on several sequential benchmarks.",2 +"abstractions for concurrent +consensus⋆",6 +"abstract +we describe a class of systems theory based neural networks +called “network of recurrent neural networks” (nor), +which introduces a new structure level to rnn related models. in nor, rnns are viewed as the high-level neurons and +are used to build the high-level layers. more specifically, +we propose several methodologies to design different nor +topologies according to the theory of system evolution. then +we carry experiments on three different tasks to evaluate our +implementations. experimental results show our models outperform simple rnn remarkably under the same number of +parameters, and sometimes achieve even better results than +gru and lstm.",9 +"abstract. in [bc], the second de rham cohomology groups of nilpotent orbits in all the complex +simple lie algebras are described. in this paper we consider non-compact non-complex exceptional +lie algebras, and compute the dimensions of the second cohomology groups for most of the nilpotent +orbits. 
for the rest of cases of nilpotent orbits, which are not covered in the above computations, +we obtain upper bounds for the dimensions of the second cohomology groups.",4 +"abstract +let r be a commutative ring with identity and specs (m ) denote the +set all second submodules of an r-module m . in this paper, we construct and study a sheaf of modules, denoted by o(n, m ), on specs (m ) +equipped with the dual zariski topology of m , where n is an r-module. +we give a characterization of the sections of the sheaf o(n, m ) in terms +of the ideal transform module. we present some interrelations between +algebraic properties of n and the sections of o(n, m ). we obtain some +morphisms of sheaves induced by ring and module homomorphisms. +2010 mathematics subject classification: 13c13, 13c99, 14a15, +14a05. +keywords and phrases: second submodule, dual zariski topology, +sheaf of modules.",0 +"abstract +automatic multi-organ segmentation of the dual energy computed tomography (dect) data can be beneficial for biomedical research and clinical applications. however, it is a challenging task. recent advances in deep learning showed the +feasibility to use 3-d fully convolutional networks (fcn) +for voxel-wise dense predictions in single energy computed +tomography (sect). in this paper, we proposed a 3d fcn +based method for automatic multi-organ segmentation in +dect. the work was based on a cascaded fcn and a general model for the major organs trained on a large set of +sect data. we preprocessed the dect data by using linear +weighting and fine-tuned the model for the dect data. the +method was evaluated using 42 torso dect data acquired +with a clinical dual-source ct system. four abdominal organs (liver, spleen, left and right kidneys) were evaluated. +cross-validation was tested. effect of the weight on the accuracy was researched. 
in all the tests, we achieved an average
+dice coefficient of 93% for the liver, 90% for the spleen, 91%
+for the right kidney and 89% for the left kidney, respectively.
+the results show our method is feasible and promising.
+index terms— dect, deep learning, multi-organ segmentation, u-net
+1. introduction
+the hounsfield unit (hu) scale value depends on the inherent tissue properties, the x-ray spectrum for scanning and the
+administered contrast media [1]. in a sect image, materials having different elemental compositions can be represented by identical hu values [2]. therefore, sect has challenges such as limited material-specific information and beam
+hardening as well as tissue characterization [1]. dect has",1
+"abstract. we bring additional support to the conjecture saying that a rational
+cuspidal plane curve is either free or nearly free. this conjecture was confirmed
+for curves of even degree, and in this note we prove it for many odd degrees. in
+particular, we show that this conjecture holds for the curves of degree at most 34.",0
+"abstract group g with the linear group ρ(g) ⊂ gl(v ). let vaff be the affine
+space corresponding to v . the group of affine transformations of vaff whose linear part
+lies in g may then be written g ⋉ v (where v stands for the group of translations). here
+is the main result of this paper.
+main theorem. suppose that ρ satisfies the following conditions:
+(i) there exists a vector v ∈ v such that:
+(a) ∀l ∈ l, l(v) = v, and
+(b) w̃0 (v) ≠ v, where w̃0 is any representative in g of w0 ∈ ng (a)/zg (a);
+then there exists a subgroup γ in the affine group g ⋉ v whose linear part is zariski-dense in g and that is free, nonabelian and acts properly discontinuously on the affine
+space corresponding to v .
+(note that the choice of the representative w̃0 in (i)(b) does not matter, precisely
+because by (i)(a) the vector v is fixed by l = zg (a).)
+remark 1.2.
it is sufficient to prove the theorem in the case where ρ is irreducible. +indeed, we may decompose ρ into a direct sum of irreducible representations, and then +observe that: +• if some representation ρ1 ⊕ · · ·⊕ ρk has a vector (v1 , . . . , vk ) that satisfies conditions +(a) and (b), then at least one of the vectors vi must satisfy conditions (a) and (b); +• if v = v1 ⊕ v2 , and a subgroup γ ⊂ g ⋉ v1 acts properly on v1 , then its image i(γ) +by the canonical inclusion i : g ⋉ v1 → g ⋉ v still acts properly on v . +we shall start working with an arbitrary representation ρ, and gradually make stronger +and stronger hypotheses on it, introducing each one when we need it to make the construction work (so that it is at least partially motivated). here is the complete list of +places where new assumptions on ρ are introduced:",4 +"abstract +little by little, newspapers are revealing the bright future that artificial intelligence (ai) is building. intelligent machines will help everywhere. however, this +bright future has a dark side: a dramatic job market contraction before its unpredictable transformation. hence, in a near future, large numbers of job seekers +will need financial support while catching up with these novel unpredictable jobs. +this possible job market crisis has an antidote inside. in fact, the rise of ai is sustained by the biggest knowledge theft of the recent years. learning ai machines are +extracting knowledge from unaware skilled or unskilled workers by analyzing their +interactions. by passionately doing their jobs, these workers are digging their own +graves. +in this paper, we propose human-in-the-loop artificial intelligence (hit-ai) +as a fairer paradigm for artificial intelligence systems. hit-ai will reward aware +and unaware knowledge producers with a different scheme: decisions of ai systems +generating revenues will repay the legitimate owners of the knowledge used for taking +those decisions. 
as modern robin hoods, hit-ai researchers should fight for a fairer +artificial intelligence that gives back what it steals.",2 +"abstract. many results are known about test ideals and f -singularities for q-gorenstein +rings. in this paper we generalize many of these results to the case when the symbolic rees +algebra ox ⊕ ox (−kx ) ⊕ ox (−2kx ) ⊕ . . . is finitely generated (or more generally, in the +log setting for −kx − ∆). in particular, we show that the f -jumping numbers of τ (x, at ) +are discrete and rational. we show that test ideals τ (x) can be described by alterations +as in blickle-schwede-tucker (and hence show that splinters are strongly f -regular in this +setting – recovering a result of singh). we demonstrate that multiplier ideals reduce to +test ideals under reduction modulo p when the symbolic rees algebra is finitely generated. +we prove that hartshorne-speiser-lyubeznik-gabber type stabilization still holds. we +also show that test ideals satisfy global generation properties in this setting.",0 +"abstract—due to the huge availability of documents in digital +form, and the deception possibility raise bound to the essence of +digital documents and the way they are spread, the authorship +attribution problem has constantly increased its relevance. nowadays, authorship attribution, for both information retrieval and +analysis, has gained great importance in the context of security, +trust and copyright preservation. +this work proposes an innovative multi-agent driven machine +learning technique that has been developed for authorship attribution. by means of a preprocessing for word-grouping and timeperiod related analysis of the common lexicon, we determine a +bias reference level for the recurrence frequency of the words +within analysed texts, and then train a radial basis neural +networks (rbpnn)-based classifier to identify the correct author. 
the main advantage of the proposed approach lies in the generality of the semantic analysis, which can be applied to different
+contexts and lexical domains, without requiring any modification.
+moreover, the proposed system is able to incorporate an external
+input, meant to tune the classifier, and then self-adjust by means
+of continuous learning reinforcement.",9
+"abstract—nondeterminism in scheduling is the cardinal reason
+for difficulty in proving correctness of concurrent programs.
+a powerful proof strategy was recently proposed [6] to show
+the correctness of such programs. the approach captured data-flow dependencies among the instructions of an interleaved and
+error-free execution of threads. these data-flow dependencies
+were represented by an inductive data-flow graph (idfg), which,
+in a nutshell, denotes a set of executions of the concurrent
+program that gave rise to the discovered data-flow dependencies.
+the idfgs were further transformed into alternative finite
+automatons (afas) in order to utilize efficient automata-theoretic
+tools to solve the problem. in this paper, we give a novel and
+efficient algorithm to directly construct afas that capture the
+data-flow dependencies in a concurrent program execution. we
+implemented the algorithm in a tool called prooftrapar to
+prove the correctness of finite state cyclic programs under the
+sequentially consistent memory model. our results are encouraging and compare favorably to existing state-of-the-art tools.",6
+"abstract. we get the computable error bounds for generalized
+cornish-fisher expansions for quantiles of statistics provided that
+the computable error bounds for edgeworth-chebyshev type expansions for distributions of these statistics are known. the results
+are illustrated by examples.",10
+abstract,2
+"abstract
+since its discovery, differential linear logic (dll) inspired numerous
+domains.
in denotational semantics, categorical models of dll are now
+common, and the simplest one is rel, the category of sets and relations.
+in proof theory this naturally gave birth to differential proof nets that are
+full and complete for dll. in turn, these tools can naturally be translated
+to their intuitionistic counterpart. by taking the co-kleisli category associated to the ! comonad, rel becomes mrel, a model of the λ-calculus that
+contains a notion of differentiation. proof nets can be used naturally to
+extend the λ-calculus into the lambda calculus with resources, a calculus
+that contains notions of linearity and differentiations. of course mrel is
+a model of the λ-calculus with resources, and it has been proved adequate,
+but is it fully abstract?
+that was a strong conjecture of bucciarelli, carraro, ehrhard and
+manzonetto in [4]. however, in this paper we exhibit a counter-example.
+moreover, to give more intuition on the essence of the counter-example
+and to look for more generality, we will use an extension of the resource
+λ-calculus also introduced by bucciarelli et al in [4] for which m∞ is fully
+abstract, the tests.",6
+"abstract. the whitney extension theorem is a classical result in analysis giving a necessary
+and sufficient condition for a function defined on a closed set to be extendable to the whole
+space with a given class of regularity. it has been adapted to several settings, among which
+the one of carnot groups. however, the target space has generally been assumed to be equal
+to rd for some d ≥ 1.
+we focus here on the extendability problem for general ordered pairs (g1 , g2 ) (with g2
+non-abelian). we analyze in particular the case g1 = r and characterize the groups g2 for
+which the whitney extension property holds, in terms of a newly introduced notion that we
+call pliability. pliability happens to be related to rigidity as defined by bryant and hsu.
we
+exploit this relation in order to provide examples of non-pliable carnot groups, that is, carnot
+groups such that the whitney extension property does not hold. we use geometric control theory
+results on the accessibility of control affine systems in order to test the pliability of a carnot
+group. in particular, we recover some recent results by le donne, speight and zimmermann
+about lusin approximation in carnot groups of step 2 and whitney extension in heisenberg
+groups. we extend such results to all pliable carnot groups, and we show that the latter may
+be of arbitrarily large step.",4
+"abstract
+a well-known np-hard problem called the generalized traveling salesman
+problem (gtsp) is considered. in gtsp the nodes of a complete undirected
+graph are partitioned into clusters. the objective is to find a minimum cost
+tour passing through exactly one node from each cluster.
+an exact exponential time algorithm and an effective meta-heuristic algorithm for the problem are presented. the meta-heuristic proposed is a
+modified ant colony system (acs) algorithm called reinforcing ant colony
+system (racs) which introduces new correction rules in the acs algorithm.
+computational results are reported for many standard test problems. the
+proposed algorithm is competitive with the other already proposed heuristics
+for the gtsp in both solution quality and computational time.",9
+abstract,6
+"abstract
+in this paper we present a general convex optimization approach for solving high-dimensional multiple response tensor regression problems under low-dimensional structural assumptions. we consider using convex and weakly decomposable regularizers
+assuming that the underlying tensor lies in an unknown low-dimensional subspace.
+within our framework, we derive general risk bounds of the resulting estimate under
+fairly general dependence structure among covariates.
our framework leads to upper +bounds in terms of two very simple quantities, the gaussian width of a convex set in +tensor space and the intrinsic dimension of the low-dimensional tensor subspace. to +the best of our knowledge, this is the first general framework that applies to multiple response problems. these general bounds provide useful upper bounds on rates +of convergence for a number of fundamental statistical models of interest including +multi-response regression, vector auto-regressive models, low-rank tensor models and +pairwise interaction models. moreover, in many of these settings we prove that the +resulting estimates are minimax optimal. we also provide a numerical study that both +validates our theoretical guarantees and demonstrates the breadth of our framework. +∗",10 +"abstract-- for dynamic security assessment considering +uncertainties in grid operations, this paper proposes an approach +for time-domain simulation of a power system having stochastic +loads. the proposed approach solves a stochastic differential +equation model of the power system in a semi-analytical way using +the adomian decomposition method. the approach generates +semi-analytical solutions expressing both deterministic and +stochastic variables explicitly as symbolic variables so as to embed +stochastic processes directly into the solutions for efficient +simulation and analysis. the proposed approach is tested on the +new england 10-machine 39-bus system with different levels of +stochastic loads. the approach is also benchmarked with a +traditional stochastic simulation approach based on the eulermaruyama method. the results show that the new approach has +better time performance and a comparable accuracy. 
+index terms—adomian decomposition method, stochastic
+differential equation, stochastic load, stochastic time-domain
+simulation.",3
+"abstract
+enhancing low resolution images via super-resolution
+or image synthesis for cross-resolution face recognition
+has been well studied. several image processing and machine learning paradigms have been explored for addressing the same. in this research, we propose the synthesis via
+deep sparse representation algorithm for synthesizing a
+high resolution face image from a low resolution input image. the proposed algorithm learns multi-level sparse representation for both high and low resolution gallery images,
+along with an identity aware dictionary and a transformation function between the two representations for face identification scenarios. with low resolution test data as input, the high resolution test image is synthesized using the
+identity aware dictionary and transformation which is then
+used for face recognition. the performance of the proposed
+sdsr algorithm is evaluated on four databases, including
+one real world dataset. experimental results and comparison with seven existing algorithms demonstrate the efficacy
+of the proposed algorithm in terms of both face identification and image quality measures.",1
+"abstract. in this work we present a flexible tool for tumor progression,
+which simulates the evolutionary dynamics of cancer. tumor progression
+implements a multi-type branching process where the key parameters
+are the fitness landscape, the mutation rate, and the average time of
+cell division. the fitness of a cancer cell depends on the mutations it
+has accumulated.
the input to our tool could be any fitness landscape,
+mutation rate, and cell division time, and the tool produces the growth
+dynamics and all relevant statistics.",5
+"abstract
+in coding theory, gray isometries are usually defined as mappings
+between finite frobenius rings, which include the ring ℤ𝑚 of integers
+modulo m and the finite fields. in this paper, we derive an isometric
+mapping from ℤ8 to ℤ24 from the composition of the gray isometries on
+ℤ8 and on ℤ24 . the image under this composition of a ℤ8 -linear block
+code of length n with homogeneous distance d is a (not necessarily
+linear) quaternary block code of length 2n with lee distance d.",7
+"abstract
+many video processing algorithms rely on optical flow to
+register different frames within a sequence. however, a precise estimation of optical flow is often neither tractable nor
+optimal for a particular task. in this paper, we propose task-oriented flow (toflow), a flow representation tailored for
+specific video processing tasks. we design a neural network
+with a motion estimation component and a video processing component. these two parts can be jointly trained in a
+self-supervised manner to facilitate learning of the proposed
+toflow. we demonstrate that toflow outperforms the traditional optical flow on three different video processing tasks:
+frame interpolation, video denoising/deblocking, and video
+super-resolution. we also introduce vimeo-90k, a large-scale,
+high-quality video dataset for video processing to better evaluate the proposed algorithm.",1
+"abstract— a new type of mobile ad hoc network, called the vehicular ad hoc network (vanet), has created a fertile
+environment for research.
+in this research, a protocol, particle swarm optimization contention-based broadcast (pcbb), is proposed for fast and effective
+dissemination of emergency messages within a geographical area in order to distribute the emergency message and achieve the safety
+system. this research will help the vanet system to achieve its safety goals in an intelligent and efficient way.
+keywords- pso; vanet; message broadcasting; emergency system; safety system.",9
+"abstract
+in this paper, we show that there is an o(log k log2 n)-competitive
+randomized algorithm for the k-server problem on any metric space
+with n points, which improves the previous best competitive ratio
+o(log2 k log3 n log log n) by nikhil bansal et al. (focs 2011, pages 267-276).
+keywords: k-server problem; online algorithm; primal-dual method;
+randomized algorithm;",8
+"abstract—this paper considers a smart grid cyber-security
+problem analyzing the vulnerabilities of electric power networks
+to false data attacks. the analysis problem is related to a
+constrained cardinality minimization problem. the main result
+shows that an l1 relaxation technique provides an exact optimal
+solution to this cardinality minimization problem. the proposed
+result is based on a polyhedral combinatorics argument. it is
+different from well-known results based on mutual coherence
+and restricted isometry property. the results are illustrated on
+benchmarks including the ieee 118-bus and 300-bus systems.",5
+"abstract— here we propose using the successor representation
+(sr) to accelerate learning in a constructive knowledge system
+based on general value functions (gvfs). in real-world settings
+like robotics for unstructured and dynamic environments, it is
+infeasible to model all meaningful aspects of a system and its
+environment by hand due to both complexity and size. instead,
+robots must be capable of learning and adapting to changes in
+their environment and task, incrementally constructing models
+from their own experience.
gvfs, taken from the field of
+reinforcement learning (rl), are a way of modeling the world
+as predictive questions. one approach to such models proposes
+a massive network of interconnected and interdependent gvfs,
+which are incrementally added over time. it is reasonable
+to expect that new, incrementally added predictions can be
+learned more swiftly if the learning process leverages knowledge
+gained from past experience. the sr provides such a means
+of separating the dynamics of the world from the prediction
+targets and thus capturing regularities that can be reused across
+multiple gvfs. as a primary contribution of this work, we show
+that using sr-based predictions can improve sample efficiency
+and learning speed in a continual learning setting where new
+predictions are incrementally added and learned over time. we
+analyze our approach in a grid-world and then demonstrate its
+potential on data from a physical robot arm.",2
+"abstract
+motivated by applications in declarative data analysis, we study datalog z —an extension of positive datalog with arithmetic functions over integers. this language is known to be undecidable, so
+we propose two fragments. in limit datalog z predicates are axiomatised to keep minimal/maximal
+numeric values, allowing us to show that fact entailment is conexptime-complete in combined,
+and conp-complete in data complexity. moreover,
+an additional stability requirement causes the complexity to drop to exptime and ptime, respectively. finally, we show that stable datalog z can
+express many useful data analysis tasks, and so our
+results provide a sound foundation for the development of advanced information systems.",2
+"abstract—compute and forward (cf) is a promising relaying scheme which, instead of decoding single messages or
+forwarding/amplifying information at the relay, decodes linear
+combinations of the simultaneously transmitted messages.
the
+current literature includes several coding schemes and results
+on the degrees of freedom in cf, yet for systems with a fixed
+number of transmitters and receivers. it is unclear, however, how
+cf behaves at the limit of a large number of transmitters.
+in this paper, we investigate the performance of cf in that
+regime. specifically, we show that as the number of transmitters
+grows, cf becomes degenerate, in the sense that a relay prefers
+to decode only one (strongest) user instead of any other linear
+combination of the transmitted codewords, treating the other
+users as noise. moreover, the sum-rate tends to zero as well. this
+makes scheduling necessary in order to maintain the superior
+abilities cf provides. indeed, under scheduling, we show that
+non-trivial linear combinations are chosen, and the sum-rate does
+not decay, even without state information at the transmitters and
+without interference alignment.",7
+"abstract—multi-frame image super-resolution (misr) aims
+to fuse information in a low-resolution (lr) image sequence to
+compose a high-resolution (hr) one, and has recently been applied extensively in many areas. different from single image super-resolution (sisr), sub-pixel transitions between multiple
+frames introduce additional information, attaching more significance to the fusion operator to alleviate the ill-posedness of
+misr. for reconstruction-based approaches, the inevitable
+projection of reconstruction errors from lr space to hr
+space is commonly tackled by an interpolation operator;
+however, crude interpolation may not fit the natural image
+and may generate annoying blurring artifacts, especially after the fusion operator. in this paper, we propose an end-to-end fast
+upscaling technique to replace the interpolation operator,
+design upscaling filters in lr space for periodic sub-locations
+respectively and shuffle the filter results to derive the final
+reconstruction errors in hr space.
the proposed fast upscaling technique not only reduces the computational complexity of
+the upscaling operation by utilizing the shuffling operation to
+avoid complex operations in hr space, but also realizes superior performance with fewer blurring artifacts. extensive experimental results demonstrate the effectiveness and efficiency
+of the proposed technique; moreover, by combining the proposed
+technique with bilateral total variation (btv) regularization,
+the misr approach outperforms state-of-the-art methods.
+index terms—multi-frame super-resolution, upscaling
+technique, bilateral total variation, shuffling operation",1
+"abstract. let m be a module over a commutative ring r. in this paper,
+we continue our study of the annihilating-submodule graph ag(m ) which was
+introduced in (the zariski topology-graph of modules over commutative rings,
+comm. algebra, 42 (2014), 3283–3296). ag(m ) is an (undirected) graph in
+which a nonzero submodule n of m is a vertex if and only if there exists
+a nonzero proper submodule k of m such that n k = (0), where n k, the
+product of n and k, is defined by (n : m )(k : m )m and two distinct vertices
+n and k are adjacent if and only if n k = (0). we prove that if ag(m ) is a
+tree, then either ag(m ) is a star graph or a path of order 4 and in the latter
+case m ∼= f × s, where f is a simple module and s is a module with a unique
+non-trivial submodule. moreover, we prove that if m is a cyclic module with
+at least three minimal prime submodules, then gr(ag(m )) = 3 and for every
+cyclic module m , cl(ag(m )) ≥ |min(m )|.",0
+"abstract. this paper studies nonparametric series estimation and inference
+for the effect of a single variable of interest x on an outcome y in the presence of potentially high-dimensional conditioning variables z. the context is
+an additively separable model e[y|x, z] = g0 (x) + h0 (z).
the model is high-dimensional in the sense that the series of approximating functions for h0 (z)
+can have more terms than the sample size, thereby allowing z to have potentially very many measured characteristics. the model is required to be
+approximately sparse: h0 (z) can be approximated using only a small subset
+of series terms whose identities are unknown. this paper proposes an estimation and inference method for g0 (x) called post-nonparametric double
+selection which is a generalization of post-double selection. standard rates
+of convergence and asymptotic normality for the estimator are shown to hold
+uniformly over a large class of sparse data generating processes. a simulation
+study illustrates finite sample estimation properties of the proposed estimator
+and coverage properties of the corresponding confidence intervals. finally, an
+empirical application estimating convergence in gdp in a country-level cross-section demonstrates the practical implementation of the proposed method.
+key words: additive nonparametric models, high-dimensional sparse regression, inference under imperfect model selection. jel codes: c1.",10
+"abstract
+we investigate the asymptotic distributions of coordinates of regression m-estimates
+in the moderate p/n regime, where the number of covariates p grows proportionally with the sample size n. under appropriate regularity conditions, we establish the coordinate-wise asymptotic normality of regression m-estimates assuming
+a fixed-design matrix. our proof is based on the second-order poincaré inequality
+(chatterjee, 2009) and leave-one-out analysis (el karoui et al., 2011). some relevant
+examples are indicated to show that our regularity conditions are satisfied by a broad
+class of design matrices. we also show a counterexample, namely the anova-type
+design, to emphasize that the technical assumptions are not just artifacts of the
+proof.
finally, the numerical experiments confirm and complement our theoretical
+results.",10
+"abstract
+in this paper we provide the complete classification of kleinian
+groups of hausdorff dimension less than 1. in particular, we prove
+that every purely loxodromic kleinian group of hausdorff dimension
+< 1 is a classical schottky group. this upper bound is sharp. as an
+application, the result of [4] then implies that every closed riemann
+surface is uniformizable by a classical schottky group. the proof relies
+on the result of hou [6], and the space of rectifiable γ-invariant closed
+curves.",4
+"abstract
+string searching consists in locating a substring in a longer text, and two strings can be
+approximately equal (various similarity measures such as the hamming distance exist).
+strings can be defined very broadly, and they usually contain natural language and
+biological data (dna, proteins), but they can also represent other kinds of data such as
+music or images.
+one solution to string searching is to use online algorithms which do not preprocess
+the input text; however, this is often infeasible due to the massive sizes of modern data
+sets. alternatively, one can build an index, i.e. a data structure which aims to speed up
+string matching queries. the indexes are divided into full-text ones which operate on
+the whole input text and can answer arbitrary queries and keyword indexes which store
+a dictionary of individual words. in this work, we present a literature review for both
+index categories as well as our contributions (which are mostly practice-oriented).
+the first contribution is the fm-bloated index, which is a modification of the well-known
+fm-index (a compressed, full-text index) that trades space for speed. in our approach,
+the count table and the occurrence lists store information about selected q-grams in
+addition to the individual characters.
two variants are described, namely one using +o(n log2 n) bits of space with o(m + log m log log n) average query time, and one with +linear space and o(m log log n) average query time, where n is the input text length +and m is the pattern length. we experimentally show that a significant speedup can be +achieved by operating on q-grams (albeit at the cost of very high space requirements, +hence the name “bloated”). +in the category of keyword indexes we present the so-called split index, which can efficiently solve the k-mismatches problem, especially for 1 error. our implementation in the +c++ language is focused mostly on data compaction, which is beneficial for the search +speed (by being cache friendly). we compare our solution with other algorithms and +we show that it is faster when the hamming distance is used. query times in the order +of 1 microsecond were reported for one mismatch for a few-megabyte natural language +dictionary on a medium-end pc. +a minor contribution includes string sketches which aim to speed up approximate string +comparison at the cost of additional space (o(1) per string). they can be used in +the context of keyword indexes in order to deduce that two strings differ by at least k +mismatches with the use of fast bitwise operations rather than an explicit verification.",8 +"abstract. this is an exposition on the general neron desingularization and its +applications. we end with a recent constructive form of this desingularization in +dimension one. +key words : artin approximation, neron desingularization, bass-quillen conjecture, quillen’s question, smooth morphisms, regular morphisms, smoothing ring +morphisms. +2010 mathematics subject classification: primary 1302, secondary 13b40, 13h05, +13h10, 13j05, 13j10, 13j15, 14b07, 14b12, 14b25.",0 +"abstract. 
an axiomatic characterization of buildings of type c3 due to tits is used to prove
+that any cohomogeneity two polar action of type c3 on a positively curved simply connected
+manifold is equivariantly diffeomorphic to a polar action on a rank one symmetric space. this
+includes two actions on the cayley plane whose associated c3 type geometry is not covered by
+a building.",4
+"abstract—this paper introduces the time synchronization attack rejection and mitigation (tsarm) technique for
+time synchronization attacks (tsas) over the global positioning system (gps). the technique estimates the clock
+bias and drift of the gps receiver along with the possible
+attack, contrary to previous approaches. having estimated
+the time instants of the attack, the clock bias and drift of
+the receiver are corrected. the proposed technique is computationally efficient and can be easily implemented in real
+time, in a fashion complementary to standard algorithms
+for position, velocity, and time estimation in off-the-shelf
+receivers. the performance of this technique is evaluated
+on a set of collected data from a real gps receiver. our
+method renders excellent time recovery consistent with the
+application requirements. the numerical results demonstrate that the tsarm technique outperforms competing
+approaches in the literature.
+index terms—global positioning system, time synchronization attack, spoofing detection",3
+"abstract—we show that the spectral efficiency of a direct
+detection transmission system is at most 1 bit/s/hz less than the
+spectral efficiency of a system employing coherent detection with
+the same modulation format. correspondingly, the capacity per
+complex degree of freedom in systems using direct detection is
+lower by at most 1 bit.",7
+"abstract
+a three dimensional digital model of a representative human kidney is needed
+for a surgical simulator that is capable of simulating a laparoscopic surgery
+involving a kidney.
buying a three dimensional computer model of a
+representative human kidney, or reconstructing a human kidney from an
+image sequence using commercial software, both involve (sometimes
+a significant amount of) money. in this paper, the author has shown that one can
+obtain a three dimensional surface model of a human kidney by making use of
+images from the visible human data set and a few free software packages
+(imagej, itk-snap, and meshlab in particular). images from the visible
+human data set, and the software packages used here, both do not cost
+anything. hence, the practice of extracting the geometry of a representative
+human kidney, as illustrated in the present work, could be a free
+alternative to the use of expensive commercial software or to the purchase of a
+digital model.
+keywords
+visible; human; data; set; kidney; surface; model; free.",5
+"abstract. we show that jn , the stanley-reisner ideal of the n-cycle, has a free resolution
+supported on the (n − 3)-dimensional simplicial associahedron an . this resolution is not
+minimal for n ≥ 6; in this case the betti numbers of jn are strictly smaller than the f -vector
+of an . we show that in fact the betti numbers βd of jn are given by the number
+of standard young tableaux of shape (d + 1, 2, 1n−d−3 ). this complements the fact that
+the number of (d − 1)-dimensional faces of an is given by the number of standard young
+tableaux of (super)shape (d + 1, d + 1, 1n−d−3 ); a bijective proof of this result was first
+provided by stanley. an application of discrete morse theory yields a cellular resolution of
+jn that we show is minimal at the first syzygy.
we furthermore exhibit a simple involution +on the set of associahedron tableaux with fixed points given by the betti tableaux, suggesting +a morse matching and in particular a poset structure on these objects.",0 +"abstract +a direct adaptive feedforward control method for tracking repeatable runout (rro) in bit patterned media recording +(bpmr) hard disk drives (hdd) is proposed. the technique estimates the system parameters and the residual rro simultaneously and constructs a feedforward signal based on a known regressor. an improved version of the proposed algorithm to avoid +matrix inversion and reduce computation complexity is given. +results for both matlab simulation and digital signal processor (dsp) implementation are provided to verify the effectiveness +of the proposed algorithm. +1",3 +"abstract. websites today routinely combine javascript from multiple sources, both trusted and untrusted. hence, javascript security is +of paramount importance. a specific interesting problem is information +flow control (ifc) for javascript. in this paper, we develop, formalize +and implement a dynamic ifc mechanism for the javascript engine of a +production web browser (specifically, safari’s webkit engine). our ifc +mechanism works at the level of javascript bytecode and hence leverages years of industrial effort on optimizing both the source to bytecode +compiler and the bytecode interpreter. we track both explicit and implicit flows and observe only moderate overhead. working with bytecode +results in new challenges including the extensive use of unstructured +control flow in bytecode (which complicates lowering of program context +taints), unstructured exceptions (which complicate the matter further) +and the need to make ifc analysis permissive. 
we
+explain how we address these challenges, formally model the javascript bytecode semantics
+and our instrumentation, prove the standard property of termination-insensitive non-interference, and present experimental results on an optimized prototype.
+keywords: dynamic information flow control, javascript bytecode, taint
+tracking, control flow graphs, immediate post-dominator analysis",6
+"abstract—this paper studies the solution of joint energy
+storage (es) ownership sharing between multiple shared facility
+controllers (sfcs) and those dwelling in a residential community.
+the main objective is to enable the residential units (rus) to
+decide on the fraction of their es capacity that they want to
+share with the sfcs of the community in order to assist them
+in storing electricity, e.g., for fulfilling the demand of various shared
+facilities. to this end, a modified auction-based mechanism is
+designed that captures the interaction between the sfcs and
+the rus so as to determine the auction price and the allocation
+of es shared by the rus that governs the proposed joint es
+ownership. the fraction of the capacity of the storage that each
+ru decides to put into the market to share with the sfcs and
+the auction price are determined by a noncooperative stackelberg
+game formulated between the rus and the auctioneer. it is shown
+that the proposed auction possesses the incentive compatibility
+and the individual rationality properties, which are leveraged via
+the unique stackelberg equilibrium (se) solution of the game.
+numerical experiments are provided to confirm the effectiveness
+of the proposed scheme.
+index terms—smart grid, shared energy storage, auction
+theory, stackelberg equilibrium, strategy-proof, incentive compatibility.",3
+"abstract. for a homogeneous polynomial with a non-zero discriminant, we
+interpret direct sum decomposability of the polynomial in terms of factorization properties of the macaulay inverse system of its milnor algebra.
this +leads to an if-and-only-if criterion for direct sum decomposability of such a +polynomial, and to an algorithm for computing direct sum decompositions +over any field, either of characteristic 0 or of sufficiently large positive characteristic, for which polynomial factorization algorithms exist. we also give +simple necessary criteria for direct sum decomposability of arbitrary homogeneous polynomials over arbitrary fields and apply them to prove that many +interesting classes of homogeneous polynomials are not direct sums.",0 +"abstract +deep learning on graphs has become a popular +research topic with many applications. however, +past work has concentrated on learning graph embedding tasks, which is in contrast with advances +in generative models for images and text. is it +possible to transfer this progress to the domain of +graphs? we propose to sidestep hurdles associated with linearization of such discrete structures +by having a decoder output a probabilistic fullyconnected graph of a predefined maximum size +directly at once. our method is formulated as +a variational autoencoder. we evaluate on the +challenging task of molecule generation.",9 +"abstract. a single qubit may be represented on the bloch sphere or +similarly on the 3-sphere s 3 . our goal is to dress this correspondence by +converting the language of universal quantum computing (uqc) to that +of 3-manifolds. a magic state and the pauli group acting on it define a +model of uqc as a povm that one recognizes to be a 3-manifold m 3 . e. +g., the d-dimensional povms defined from subgroups of finite index of +the modular group p sl(2, z) correspond to d-fold m 3 - coverings over +the trefoil knot. in this paper, one also investigates quantum information +on a few ‘universal’ knots and links such as the figure-of-eight knot, +the whitehead link and borromean rings , making use of the catalog +of platonic manifolds available on snappy [4] . 
further connections +between povms based uqc and m 3 ’s obtained from dehn fillings are +explored. +pacs: 03.67.lx, 03.65.wj, 03.65.aa, 02.20.-a, 02.10.kn, 02.40.pc, 02.40.sf +msc codes: 81p68, 81p50, 57m25, 57r65, 14h30, 20e05, 57m12 +keywords: quantum computation, ic-povms, knot theory, three-manifolds, branch +coverings, dehn surgeries.",4 +"abstract +we present an accurate and efficient discretization approach for +the adaptive discretization of typical model equations employed in +numerical weather prediction. a semi-lagrangian approach is combined with the tr-bdf2 semi-implicit time discretization method +and with a spatial discretization based on adaptive discontinuous finite elements. the resulting method has full second order accuracy +in time and can employ polynomial bases of arbitrarily high degree in +space, is unconditionally stable and can effectively adapt the number +of degrees of freedom employed in each element, in order to balance accuracy and computational cost. the p−adaptivity approach employed +does not require remeshing, therefore it is especially suitable for applications, such as numerical weather prediction, in which a large number +of physical quantities are associated with a given mesh. furthermore, +although the proposed method can be implemented on arbitrary unstructured and nonconforming meshes, even its application on simple +cartesian meshes in spherical coordinates can cure effectively the pole +problem by reducing the polynomial degree used in the polar elements. 
+numerical simulations of classical benchmarks for the shallow water
+and for the fully compressible euler equations validate the method
+and demonstrate its capability to achieve accurate results also at large
+courant numbers, with time steps up to 100 times larger than those
+of typical explicit discretizations of the same problems, while reducing
+the computational cost thanks to the adaptivity algorithm.",5
+"abstract
+urban rail transit often operates with high service frequencies to serve heavy passenger demand
+during rush hours. such operations can be delayed by train congestion, passenger congestion, and
+the interaction of the two. delays are problematic for many transit systems, as they become
+amplified by this interactive feedback. however, there are no tractable models to describe
+transit systems with dynamical delays, making it difficult to analyze the management strategies of
+congested transit systems in general, solvable ways. to fill this gap, this article proposes simple yet
+physical and dynamic models of urban rail transit. first, a fundamental diagram of a transit system
+(3-dimensional relation among train-flow, train-density, and passenger-flow) is analytically derived
+by considering the physical interactions in delays and congestion based on microscopic operation
+principles. then, a macroscopic model of a transit system with time-varying demand and supply is
+developed as a continuous approximation based on the fundamental diagram. finally, the accuracy
+of the macroscopic model is investigated using a microscopic simulation, and the applicable range of
+the model is confirmed.",3
+"abstract. in this paper, we formulate an analogue of waring’s problem for an algebraic group g. at the field level we consider a morphism
+of varieties f : a1 → g and ask whether every element of g(k) is the
+product of a bounded number of elements of f (a1 (k)) = f (k).
we give +an affirmative answer when g is unipotent and k is a characteristic zero +field which is not formally real. +the idea is the same at the integral level, except one must work with +schemes, and the question is whether every element in a finite index +subgroup of g(o) can be written as a product of a bounded number of +elements of f (o). we prove this is the case when g is unipotent and o +is the ring of integers of a totally imaginary number field.",4 +"abstract +we study the following multiagent variant of the knapsack problem. we are given a +set of items, a set of voters, and a value of the budget; each item is endowed with a cost +and each voter assigns to each item a certain value. the goal is to select a subset of items +with the total cost not exceeding the budget, in a way that is consistent with the voters’ +preferences. since the preferences of the voters over the items can vary significantly, +we need a way of aggregating these preferences, in order to select the socially most +preferred valid knapsack. we study three approaches to aggregating voters preferences, +which are motivated by the literature on multiwinner elections and fair allocation. this +way we introduce the concepts of individually best, diverse, and fair knapsack. we study +computational complexity (including parameterized complexity, and complexity under +restricted domains) of computing the aforementioned concepts of multiagent knapsacks.",8 +"abstract. we describe triples and systems, expounded as an axiomatic algebraic umbrella theory for +classical algebra, tropical algebra, hyperfields, and fuzzy rings.",0 +"abstract +in this paper, random forests are proposed for operating devices diagnostics in the presence of a variable number of features. in various +contexts, like large or difficult-to-access monitored areas, wired sensor +networks providing features to achieve diagnostics are either very costly +to use or totally impossible to spread out. 
using a wireless sensor network can solve this problem, but the latter is more subject to failures. furthermore, the network’s topology often changes, leading to variability in the quality of coverage in the targeted area. diagnostics at the sink level must take into consideration that both the number and the quality of the provided features are not constant, and that some policies, such as scheduling or data aggregation, may be deployed across the network. the aim of this article is (1) to show that random forests are relevant in this context, due to their flexibility and robustness, and (2) to provide first examples of the use of this method for diagnostics based on data provided by a wireless sensor network.",2
"abstract
deep convolutional neural networks (cnns) are more powerful than deep neural networks (dnns), as they are able to better reduce spectral variation in the input signal. this has also been confirmed experimentally, with cnns showing improvements in word error rate (wer) of 4-12% relative compared to dnns across a variety of lvcsr tasks. in this paper, we describe different methods to further improve cnn performance. first, we conduct a deep analysis comparing limited weight sharing and full weight sharing with state-of-the-art features. second, we apply various pooling strategies that have shown improvements in computer vision to an lvcsr speech task. third, we introduce a method to effectively incorporate speaker adaptation, namely fmllr, into log-mel features. fourth, we introduce an effective strategy to use dropout during hessian-free sequence training. we find that with these improvements, particularly with fmllr and dropout, we are able to achieve an additional 2-3% relative improvement in wer on a 50-hour broadcast news task over our previous best cnn baseline. on a larger 400-hour bn task, we find an additional 4-5% relative improvement over our previous best cnn baseline.
1.
introduction +deep neural networks (dnns) are now the state-of-the-art in acoustic modeling for speech recognition, showing tremendous improvements on the order of 10-30% relative across a variety of small and +large vocabulary tasks [1]. recently, deep convolutional neural networks (cnns) [2, 3] have been explored as an alternative type of +neural network which can reduce translational variance in the input +signal. for example, in [4], deep cnns were shown to offer a 4-12% +relative improvement over dnns across different lvcsr tasks. the +cnn architecture proposed in [4] was a somewhat vanilla architecture that had been used in computer vision for many years. the goal +of this paper is to analyze and justify what is an appropriate cnn architecture for speech, and to investigate various strategies to improve +cnn results further. +first, the architecture proposed in [4] used multiple convolutional layers with full weight sharing (fws), which was found to be +beneficial compared to a single fws convolutional layer. because +the locality of speech is known ahead of time, [3] proposed the use +of limited weight sharing (lws) for cnns in speech. while lws +has the benefit that it allows each local weight to focus on parts of +the signal which are most confusable, previous work with lws had +just focused on a single lws layer [3], [5]. in this work, we do a +detailed analysis and compare multiple layers of fws and lws.",9 +"abstract— this paper presents a practical approach for +identifying unknown mechanical parameters, such as mass +and friction models of manipulated rigid objects or actuated +robotic links, in a succinct manner that aims to improve the +performance of policy search algorithms. key features of this +approach are the use of off-the-shelf physics engines and the +adaptation of a black-box bayesian optimization framework +for this purpose. 
the physics engine is used to reproduce in +simulation experiments that are performed on a real robot, +and the mechanical parameters of the simulated system are +automatically fine-tuned so that the simulated trajectories +match with the real ones. the optimized model is then used for +learning a policy in simulation, before safely deploying it on the +real robot. given the well-known limitations of physics engines +in modeling real-world objects, it is generally not possible to +find a mechanical model that reproduces in simulation the real +trajectories exactly. moreover, there are many scenarios where +a near-optimal policy can be found without having a perfect +knowledge of the system. therefore, searching for a perfect +model may not be worth the computational effort in practice. +the proposed approach aims then to identify a model that +is good enough to approximate the value of a locally optimal +policy with a certain confidence, instead of spending all the +computational resources on searching for the most accurate +model. empirical evaluations, performed in simulation and on +a real robotic manipulation task, show that model identification +via physics engines can significantly boost the performance of +policy search algorithms that are popular in robotics, such as +trpo, power and pilco, with no additional real-world data.",2 +"abstract +prior to the financial crisis mortgage securitization models increased in sophistication as did products +built to insure against losses. layers of complexity formed upon a foundation that could not support +it and as the foundation crumbled the housing market followed. that foundation was the gaussian +copula which failed to correctly model failure-time correlations of derivative securities in duress. in +retirement, surveys suggest the greatest fear is running out of money and as retirement decumulation +models become increasingly sophisticated, large financial firms and robo-advisors may guarantee +their success. 
similar to an investment bank failure, the event of retirement ruin is driven by outliers and correlations in times of stress. it would be desirable to have a foundation able to support the increased complexity before it forms; however, the industry currently relies upon similar gaussian (or lognormal) dependence structures. we propose a multivariate density model having fixed marginals that is tractable and fits data which are skewed, heavy-tailed, multimodal, i.e., of arbitrary complexity, allowing for a rich correlation structure. it is also ideal for stress-testing a retirement plan by fitting historical data seeded with black swan events. a preliminary section reviews all concepts before they are used, and fully documented c/c++ source code is attached, making the research self-contained. lastly, we take the opportunity to challenge existing retirement finance dogma and also review some recent criticisms of retirement ruin probabilities and their suggested replacement metrics.
table of contents: introduction; i. literature review; ii. preliminaries; iii. univariate density modeling; iv. multivariate density modeling w/out covariances; v. multivariate density modeling w/covariances; vi. expense-adjusted real compounding return on a diversified portfolio; vii. retirement portfolio optimization; viii. conclusion; references; data sources/retirement surveys; ix. appendix with source code
keywords: variance components, em algorithm, ecme algorithm, maximum likelihood, pdf, cdf, information criteria, finite mixture model, constrained optimization, retirement decumulation, probability of ruin, static/dynamic glidepaths, financial crisis
contact: cjr5@njit.edu",5
"abstract
we construct new examples of cat(0) groups containing non-finitely-presented subgroups that are of type fp_2; these cat(0) groups do not contain copies of z^3. we also give a construction of groups which are of type f_n but not f_{n+1}, with no free abelian subgroups of rank greater than ⌈n/3⌉.",4
"abstract— in this paper, we propose an automated computer platform for the purpose of classifying electroencephalography (eeg) signals associated with left and right hand movements using a hybrid system that uses advanced feature extraction techniques and machine learning algorithms. it is known that eeg represents the brain activity by the electrical voltage fluctuations along the scalp, and a brain-computer interface (bci) is a device that enables the use of the brain’s neural activity to communicate with others or to control machines, artificial limbs, or robots without direct physical movements.
in our research +work, we aspired to find the best feature extraction method that +enables the differentiation between left and right executed fist +movements through various classification algorithms. the eeg +dataset used in this research was created and contributed to +physionet by the developers of the bci2000 instrumentation +system. data was preprocessed using the eeglab matlab +toolbox and artifacts removal was done using aar. data was +epoched on the basis of event-related (de) synchronization +(erd/ers) and movement-related cortical potentials (mrcp) +features. mu/beta rhythms were isolated for the erd/ers +analysis and delta rhythms were isolated for the mrcp analysis. +the independent component analysis (ica) spatial filter was +applied on related channels for noise reduction and isolation of +both artifactually and neutrally generated eeg sources. the +final feature vector included the erd, ers, and mrcp features +in addition to the mean, power and energy of the activations of +the resulting independent components (ics) of the epoched +feature datasets. the datasets were inputted into two machinelearning algorithms: neural networks (nns) and support vector +machines (svms). intensive experiments were carried out and +optimum classification performances of 89.8 and 97.1 were +obtained using nn and svm, respectively. this research shows +that this method of feature extraction holds some promise for the +classification of various pairs of motor movements, which can be +used in a bci context to mentally control a computer or machine. +keywords—eeg; bci; ica; mrcp; erd/ers; machine +learning; nn; svm",9 +"abstract +we prove a new and general concentration inequality for the excess risk in least-squares regression +with random design and heteroscedastic noise. no specific structure is required on the model, except the +existence of a suitable function that controls the local suprema of the empirical process. 
so far, only the +case of linear contrast estimation was tackled in the literature with this level of generality on the model. +we solve here the case of a quadratic contrast, by separating the behavior of a linearized empirical process +and the empirical process driven by the squares of functions of models. +keywords: regression, least-squares, excess risk, empirical process, concentration inequality, margin +relation. +ams2000 : 62g08, 62j02, 60e15.",10 +"abstract. let x be a building, identified with its davis realisation. in this +paper, we provide for each x ∈ x and each η in the visual boundary ∂x of +x a description of the geodesic ray bundle geo(x, η), namely, of the union of +all combinatorial geodesic rays (corresponding to infinite minimal galleries in +the chamber graph of x) starting from x and pointing towards η. when x is +locally finite and hyperbolic, we show that the symmetric difference between +geo(x, η) and geo(y, η) is always finite, for x, y ∈ x and η ∈ ∂x. this gives +a positive answer to a question of huang, sabok and shinko in the setting of +buildings. combining their results with a construction of bourdon, we obtain +examples of hyperbolic groups g with kazhdan’s property (t) such that the +g-action on its gromov boundary is hyperfinite.",4 +"abstract. a feature-oriented product line is a family of programs that share a +common set of features. a feature implements a stakeholder’s requirement, represents a design decision and configuration option and, when added to a program, +involves the introduction of new structures, such as classes and methods, and the +refinement of existing ones, such as extending methods. with feature-oriented +decomposition, programs can be generated, solely on the basis of a user’s selection of features, by the composition of the corresponding feature code. 
a key +challenge of feature-oriented product line engineering is how to guarantee the +correctness of an entire feature-oriented product line, i.e., of all of the member +programs generated from different combinations of features. as the number of +valid feature combinations grows progressively with the number of features, it +is not feasible to check all individual programs. the only feasible approach is +to have a type system check the entire code base of the feature-oriented product +line. we have developed such a type system on the basis of a formal model of a +feature-oriented java-like language. we demonstrate that the type system ensures +that every valid program of a feature-oriented product line is well-typed and that +the type system is complete.",6 +"abstract. a multigraph is a nonsimple graph which is permitted to have +multiple edges, that is, edges that have the same end nodes. we introduce +the concept of spanning simplicial complexes ∆s (g) of multigraphs g, which +provides a generalization of spanning simplicial complexes of associated +simple graphs. we give first the characterization of all spanning trees of a +r +uni-cyclic multigraph un,m +with n edges including r multiple edges within +and outside the cycle of length m. then, we determine the facet ideal +r +r +if (∆s (un,m +)) of spanning simplicial complex ∆s (un,m +) and its primary +decomposition. the euler characteristic is a well-known topological and +homotopic invariant to classify surfaces. finally, we device a formula for +r +euler characteristic of spanning simplicial complex ∆s (un,m +). +key words: multigraph, spanning simplicial complex, euler characteristic. +2010 mathematics subject classification: primary 05e25, 55u10, 13p10, +secondary 06a11, 13h10.",0 +"abstract. it has been conjectured by eisenbud, green and harris that if i is +a homogeneous ideal in k[x1 , . . . , xn ] containing a regular sequence f1 , . . . 
, fn of degrees deg(fi) = ai, where 2 ≤ a1 ≤ ⋯ ≤ an, then there is a homogeneous ideal j containing x_1^{a_1}, . . . , x_n^{a_n} with the same hilbert function. in this paper we prove the eisenbud-green-harris conjecture when fi splits into linear factors for all i.",0
"abstract. a group is tubular if it acts on a tree with z2 vertex stabilizers and z edge stabilizers. we prove that a tubular group is virtually special if and only if it acts freely on a locally finite cat(0) cube complex. furthermore, we prove that if a tubular group acts freely on a finite-dimensional cat(0) cube complex, then it virtually acts freely on a three-dimensional cat(0) cube complex.",4
"abstract—successful fine-grained image classification methods learn subtle details between visually similar (sub-)classes, but the problem becomes significantly more challenging if the details are missing due to low resolution. encouraged by the recent success of convolutional neural network (cnn) architectures in image classification, we propose a novel resolution-aware deep model which combines convolutional image super-resolution and convolutional fine-grained classification into a single model in an end-to-end manner. extensive experiments on multiple benchmarks demonstrate that the proposed model consistently performs better than conventional convolutional networks on classifying fine-grained object classes in low-resolution images.
index terms—fine-grained image classification, super-resolution convolutional neural networks, deep learning",1
"abstract
this paper explains the genetic algorithm for novices in this field. the basic philosophy of the genetic algorithm and its flowchart are described. a step-by-step numerical computation of the genetic algorithm for solving a simple mathematical equality problem is briefly explained.",9
"abstract. we construct 2-generator non-hopfian groups gm , m = 3, 4, 5, . . .
, +where each gm has a specific presentation gm = ha, b | urm,0 = urm,1 = urm,2 = +· · · = 1i which satisfies small cancellation conditions c(4) and t (4). here, urm,i +is the single relator of the upper presentation of the 2-bridge link group of slope +rm,i , where rm,0 = [m + 1, m, m] and rm,i = [m + 1, m − 1, (i − 1)hmi, m + 1, m] +in continued fraction expansion for every integer i ≥ 1.",4 +"abstract—we propose an energy-efficient procedure for +transponder configuration in fmf-based elastic optical networks +in which quality of service and physical constraints are guaranteed and joint optimization of transmit optical power, temporal, +spatial and spectral variables are addressed. we use geometric +convexification techniques to provide convex representations for +quality of service, transponder power consumption and transponder configuration problem. simulation results show that our +convex formulation is considerably faster than its mixed-integer +nonlinear counterpart and its ability to optimize transmit optical +power reduces total transponder power consumption up to 32%. +we also analyze the effect of mode coupling and number of +available modes on power consumption of different network +elements. +keywords—convex optimization, green communication, elastic +optical networks, few-mode fibers, mode coupling.",7 +"abstract—in this paper, a new video classification +methodology is proposed which can be applied in both first and +third person videos. the main idea behind the proposed +strategy is to capture complementary information of +appearance and motion efficiently by performing two +independent streams on the videos. the first stream is aimed to +capture long-term motions from shorter ones by keeping track +of how elements in optical flow images have changed over time. +optical flow images are described by pre-trained networks +that have been trained on large scale image datasets. 
a set of multi-channel time series is obtained by aligning the descriptions side by side. for extracting motion features from these time series, the pot representation method, together with a novel pooling operator, is used owing to its several advantages. the second stream extracts appearance features, which are vital for video classification. the proposed method has been evaluated on both first- and third-person datasets, and the results show that the proposed methodology achieves state-of-the-art performance.",1
"abstract—assignment of critical missions to unmanned aerial vehicles (uavs) is bound to widen the grounds for adversarial intentions in the cyber domain, potentially ranging from disruption of command and control links to capture and use of airborne nodes for kinetic attacks. ensuring the security of electronics and communications in multi-uav systems is of paramount importance for their safe and reliable integration with military and civilian airspaces. over the past decade, this active field of research has produced many notable studies and novel proposals for attacks and mitigation techniques in uav networks. yet, the generic modeling of such networks as typical manets and isolated systems has left various vulnerabilities out of the investigative focus of the research community. this paper aims to emphasize some of the critical challenges in securing uav networks against attacks targeting vulnerabilities specific to such systems and their cyber-physical aspects.
index terms—uav, cyber-physical security, vulnerabilities",3
abstract,7
"abstract
the triad census is an important approach to understanding local structure in network science, providing comprehensive assessments of the observed relational configurations between triples of actors in a network. however, researchers are often interested in combinations of relational and categorical nodal attributes.
in this case, it is desirable to account for the label, or color, of the nodes in the triad census. in this paper, we describe an efficient algorithm for constructing the colored triad census, based, in part, on existing methods for the classic triad census. we evaluate the performance of the algorithm using empirical and simulated data for both undirected and directed graphs. the results of the simulation demonstrate that the proposed algorithm reduces computational time by approximately 17,400% over the naïve approach. we also apply the colored triad census to the zachary karate club network dataset. we simultaneously show the efficiency of the algorithm and a way to conduct a statistical test on the census by forming a null distribution from 1,000 realizations of a mixing-matrix-conditioned graph and comparing the observed colored triad counts to the expected counts. from this, we demonstrate the method’s utility in our discussion of results about homophily, heterophily, and bridging, simultaneously gained via the colored triad census. in sum, the proposed algorithm for the colored triad census brings novel utility to social network analysis in an efficient package.
keywords: triad census, labeled graphs, simulation
1. introduction
the triad census is an important approach towards understanding local network structure. [?] first presented the 16 isomorphism classes of structurally unique triads",8
"abstract
we propose several sampling architectures for the efficient acquisition of an ensemble of correlated signals. we show that without prior knowledge of the correlation structure, each of our architectures (under different sets of assumptions) can acquire the ensemble at a sub-nyquist rate. prior to sampling, the analog signals are diversified using simple, implementable components. the diversification is achieved by injecting types of “structured randomness” into the ensemble, the result of which is subsampled.
+for reconstruction, the ensemble is modeled as a low-rank matrix that we have observed through an +(undetermined) set of linear equations. our main results show that this matrix can be recovered using +a convex program when the total number of samples is on the order of the intrinsic degree of freedom of +the ensemble — the more heavily correlated the ensemble, the fewer samples are needed. +to motivate this study, we discuss how such ensembles arise in the context of array processing.",7 +"abstract. the extraction of fibers from dmri data typically produces a +large number of fibers, it is common to group fibers into bundles. to this +end, many specialized distance measures, such as mcp, have been used +for fiber similarity. however, these distance based approaches require +point-wise correspondence and focus only on the geometry of the fibers. +recent publications have highlighted that using microstructure measures +along fibers improves tractography analysis. also, many neurodegenerative diseases impacting white matter require the study of microstructure +measures as well as the white matter geometry. motivated by these, we +propose to use a novel computational model for fibers, called functional +varifolds, characterized by a metric that considers both the geometry +and microstructure measure (e.g. gfa) along the fiber pathway. we use +it to cluster fibers with a dictionary learning and sparse coding-based +framework, and present a preliminary analysis using hcp data.",1 +"abstract +the slower is faster (sif) effect occurs when a system performs worse as its components try to do better. thus, a moderate individual efficiency actually leads to a +better systemic performance. the sif effect takes place in a variety of phenomena. +we review studies and examples of the sif effect in pedestrian dynamics, vehicle traffic, traffic light control, logistics, public transport, social dynamics, ecological systems, +and adaptation. 
drawing on these examples, we generalize common features of the sif +effect and suggest possible future lines of research.",9 +"abstract +let x be a negatively curved symmetric space and γ a non-cocompact lattice in isom(x). we show that +small, parabolic-preserving deformations of γ into the isometry group of any negatively curved symmetric +space containing x remain discrete and faithful (the cocompact case is due to guichard). this applies +in particular to a version of johnson-millson bending deformations, providing for all n infnitely many noncocompact lattices in so(n, 1) which admit discrete and faithful deformations into su(n, 1). we also produce +deformations of the figure-8 knot group into su(3, 1), not of bending type, to which the result applies.",4 +"abstract feature, implying that our results may generalize to feature +selectivity, we do not examine feature selectivity in this work.",2 +"abstract +the goal of this work is to extend the standard persistent homology pipeline for +exploratory data analysis to the 2-d persistence setting, in a practical, computationally +efficient way. to this end, we introduce rivet, a software tool for the visualization of +2-d persistence modules, and present mathematical foundations for this tool. rivet +provides an interactive visualization of the barcodes of 1-d affine slices of a 2-d persistence module m . it also computes and visualizes the dimension of each vector space in +m and the bigraded betti numbers of m . at the heart of our computational approach +is a novel data structure based on planar line arrangements, on which we can perform +fast queries to find the barcode of any slice of m . we present an efficient algorithm +for constructing this data structure and establish bounds on its complexity.",0 +"abstract +in this paper, a new approach to solve the cubic b-spline curve fitting problem is +presented based on a meta-heuristic algorithm called “dolphin echolocation”. 
the +method minimizes the proximity error value of the selected nodes that measured +using the least squares method and the euclidean distance method of the new curve +generated by the reverse engineering. the results of the proposed method are +compared with the genetic algorithm. as a result, this new method seems to be +successful. +keywords: b-spline curve approximation, cubic b-spline, data parameterization on b-spline, dolphin +echolocation algorithm, knot adjustment",9 +"abstract +in the classic integer programming (ip) problem, the objective is to decide whether, for a +given m × n matrix a and an m-vector b = (b1 , . . . , bm ), there is a non-negative integer n-vector +x such that ax = b. solving (ip) is an important step in numerous algorithms and it is important +to obtain an understanding of the precise complexity of this problem as a function of natural +parameters of the input. +two significant results in this line of research are the pseudo-polynomial time algorithms +for (ip) when the number of constraints is a constant [papadimitriou, j. acm 1981] and when +the branch-width of the column-matroid corresponding to the constraint matrix is a constant +[cunningham and geelen, ipco 2007]. in this paper, we prove matching upper and lower bounds +for (ip) when the path-width of the corresponding column-matroid is a constant. these lower +bounds provide evidence that the algorithm of cunningham and geelen, are probably optimal. +we also obtain a separate lower bound providing evidence that the algorithm of papadimitriou +is close to optimal.",8 +"abstract. we prove that if γ is a lattice in the group of isometries of a symmetric space of non-compact type without euclidean +factors, then the virtual cohomological dimension of γ equals its +proper geometric dimension.",4 +"abstract. we introduce the fractal expansions, sequences of integers associated to a number. these +can be used to characterize the o-sequences. 
we generalize them by introducing numerical functions +called fractal functions. we classify the hilbert functions of bigraded algebras by using fractal functions.",0 +"abstract in this paper we introduce and analyse langevin samplers that consist of perturbations +of the standard underdamped langevin dynamics. the perturbed dynamics is such that its invariant +measure is the same as that of the unperturbed dynamics. we show that appropriate choices of the +perturbations can lead to samplers that have improved properties, at least in terms of reducing the +asymptotic variance. we present a detailed analysis of the new langevin sampler for gaussian target +distributions. our theoretical results are supported by numerical experiments with non-gaussian target +measures.",10 +"abstract—for homeland and transportation security applications, 2d x-ray explosive detection system (eds) have been +widely used, but they have limitations in recognizing 3d shape +of the hidden objects. among various types of 3d computed +tomography (ct) systems to address this issue, this paper is +interested in a stationary ct using fixed x-ray sources and +detectors. however, due to the limited number of projection +views, analytic reconstruction algorithms produce severe streaking artifacts. inspired by recent success of deep learning approach +for sparse view ct reconstruction, here we propose a novel image +and sinogram domain deep learning architecture for 3d reconstruction from very sparse view measurement. the algorithm +has been tested with the real data from a prototype 9-view dual +energy stationary ct eds carry-on baggage scanner developed +by gemss medical systems, korea, which confirms the superior +reconstruction performance over the existing approaches.",2 +"abstract +we consider the problem of nonparametric estimation of the drift of a continuously observed one-dimensional diffusion with periodic drift. motivated by computational considerations, van der meulen et al. 
(2014) defined a prior on the drift as a randomly truncated
+and randomly scaled faber-schauder series expansion with gaussian coefficients. we study
+the behaviour of the posterior obtained from this prior from a frequentist asymptotic point
+of view. if the true data generating drift is smooth, it is proved that the posterior is adaptive
+with posterior contraction rates for the l2-norm that are optimal up to a log factor. contraction rates in lp-norms with p ∈ (2, ∞] are derived as well.",10
+"abstract
+in the standard setting of approachability there are two players and a target set. the
+players play repeatedly a known vector-valued game in which the first player wants to have
+the average vector-valued payoff converge to the target set, while the other player tries to
+exclude it from this set. we revisit this setting in the spirit of online learning and do not
+assume that the first player knows the game structure: she receives an arbitrary vector-valued reward vector at every round. she wishes to approach the smallest (“best”) possible
+set given the observed average payoffs in hindsight. this extension of the standard setting
+has implications even when the original target set is not approachable and when it is not
+obvious which expansion of it should be approached instead. we show that it is impossible,
+in general, to approach the best target set in hindsight and propose achievable though
+ambitious alternative goals. we further propose a concrete strategy to approach these goals.
+our method does not require projection onto a target set and amounts to switching between
+scalar regret minimization algorithms that are performed in episodes. applications to global
+cost minimization and to approachability under sample path constraints are considered.
+keywords: approachability, online learning, multi-objective optimization",10
+"abstract
+we propose expected policy gradients (epg), which unify stochastic policy gradients (spg)
+and deterministic policy gradients (dpg) for reinforcement learning. inspired by expected
+sarsa, epg integrates (or sums) across actions when estimating the gradient, instead of
+relying only on the action in the sampled trajectory. for continuous action spaces, we first
+derive a practical result for gaussian policies and quadric critics and then extend it to
+an analytical method for the universal case, covering a broad class of actors and critics,
+including gaussian, exponential families, and reparameterised policies with bounded support.
+for gaussian policies, we show that it is optimal to explore using covariance proportional
+to e^h, where h is the scaled hessian of the critic with respect to the actions. epg also
+provides a general framework for reasoning about policy gradient methods, which we use to
+establish a new general policy gradient theorem, of which the stochastic and deterministic
+policy gradient theorems are special cases. furthermore, we prove that epg reduces the
+variance of the gradient estimates without requiring deterministic policies and with little
+computational overhead. finally, we show that epg outperforms existing approaches on
+six challenging domains involving the simulated control of physical systems.
+keywords: policy gradients, exploration, bounded actions, reinforcement learning, markov
+decision process (mdp)",2
+"abstract
+variational autoencoders (vaes) learn representations of data by jointly training a probabilistic
+encoder and decoder network. typically these models encode all features of the data into a
+single variable. here we are interested in learning disentangled representations that encode
+distinct aspects of the data into separate variables.
we propose to learn such representations +using model architectures that generalise from standard vaes, employing a general graphical +model structure in the encoder and decoder. this allows us to train partially-specified models +that make relatively strong assumptions about a subset of interpretable variables and rely on +the flexibility of neural networks to learn representations for the remaining variables. we +further define a general objective for semi-supervised learning in this model class, which can be +approximated using an importance sampling procedure. we evaluate our framework’s ability +to learn disentangled representations, both by qualitative exploration of its generative capacity, +and quantitative evaluation of its discriminative ability on a variety of models and datasets.",2 +"abstract +we study divided power structures on finitely generated k-algebras, where k is a field of positive characteristic p. as an application we show examples of 0-dimensional gorenstein k-schemes that do not lift +to a fixed noetherian local ring of non-equal characteristic. we also show that frobenius neighbourhoods +of a singular point of a general hypersurface of large dimension have no liftings to mildly ramified rings +of non-equal characteristic.",0 +"abstract +convolutional neural networks (cnns) are being applied to an increasing number of problems and fields due to their superior performance in classification and regression tasks. since two of the key +operations that cnns implement are convolution and pooling, this +type of networks is implicitly designed to act on data described by +regular structures such as images. motivated by the recent interest +in processing signals defined in irregular domains, we advocate a +cnn architecture that operates on signals supported on graphs. 
the +proposed design replaces the classical convolution not with a nodeinvariant graph filter (gf), which is the natural generalization of convolution to graph domains, but with a node-varying gf. this filter +extracts different local features without increasing the output dimension of each layer and, as a result, bypasses the need for a pooling +stage while involving only local operations. a second contribution +is to replace the node-varying gf with a hybrid node-varying gf, +which is a new type of gf introduced in this paper. while the alternative architecture can still be run locally without requiring a pooling +stage, the number of trainable parameters is smaller and can be rendered independent of the data dimension. tests are run on a synthetic +source localization problem and on the 20news dataset. +index terms— convolutional neural networks, network data, +graph signal processing, node-varying graph filters. +1. introduction +convolutional neural networks (cnns) have shown remarkable performance in a wide array of inference and reconstruction tasks [1], +in fields as diverse as pattern recognition, computer vision and +medicine [2–4]. the objective of cnns is to find a computationally +feasible architecture capable of reproducing the behavior of a certain unknown function. typically, cnns consist of a succession of +layers, each of which performs three simple operations – usually on +the output of the previous layer – and feed the result into the next +layer. these three operations are: 1) convolution, 2) application +of a nonlinearity, and 3) pooling or downsampling. because the +classical convolution and downsampling operations are defined for +regular (grid-based) domains, cnns have been applied to act on +data modeled by such a regular structure, like time or images. +however, an accurate description of modern datasets such as +those in social networks or genetics [5, 6] calls for more general +irregular structures. 
a framework that has been gaining traction to +tackle these problems is that of graph signal processing (gsp) [7–9]. +gsp postulates that data can be modeled as a collection of values associated with the nodes of a graph, whose edges describe pairwise +relationships between the data. by exploiting the interplay between +the data and the graph, traditional signal processing concepts such +supported by usa nsf ccf 1717120 and aro w911nf1710438, and +spanish mineco tec2013-41604-r and tec2016-75361-r.",9 +abstract,8 +"abstract +hash tables are ubiquitous in computer science for efficient access +to large datasets. however, there is always a need for approaches that +offer compact memory utilisation without substantial degradation of +lookup performance. cuckoo hashing is an efficient technique of creating hash tables with high space utilisation and offer a guaranteed +constant access time. we are given n locations and m items. each +item has to be placed in one of the k ≥ 2 locations chosen by k random +hash functions. by allowing more than one choice for a single item, +cuckoo hashing resembles multiple choice allocations schemes. in addition it supports dynamically changing the location of an item among +its possible locations. we propose and analyse an insertion algorithm +for cuckoo hashing that runs in linear time with high probability and +in expectation. previous work on total allocation time has analysed +breadth first search, and it was shown to be linear only in expectation. +our algorithm finds an assignment (with probability 1) whenever it exists. in contrast, the other known insertion method, known as random +walk insertion, may run indefinitely even for a solvable instance. we +also present experimental results comparing the performance of our +algorithm with the random walk method, also for the case when each +location can hold more than one item. 
+as a corollary we obtain a linear time algorithm (with high probability and in expectation) for finding perfect matchings in a special
+class of sparse random bipartite graphs. we support this by performing
+experiments on a real world large dataset for finding maximum matchings in general large bipartite graphs. we report an order of magnitude
+improvement in the running time as compared to the hopcroft-karp
+matching algorithm.
+∗",8
+"abstract classical
+theories. in particular, we review theories which did not have any algorithmic content in their general natural framework, such as galois theory,
+the dedekind rings, the finitely generated projective modules or the krull
+dimension.
+constructive algebra is actually an old discipline, developed among others
+by gauss and kronecker. we are in line with the modern “bible” on
+the subject, which is the book by ray mines, fred richman and wim
+ruitenburg, a course in constructive algebra, published in 1988. we will
+cite it in abbreviated form [mrr].
+this work corresponds to an msc graduate level, at least up to chapter xiv,
+but only requires as prerequisites the basic notions concerning group theory,
+linear algebra over fields, determinants, modules over commutative rings,
+as well as the definition of quotient and localized rings. a familiarity with
+polynomial rings, the arithmetic properties of z and euclidean rings is also
+desirable.
+finally, note that we consider the exercises and problems (a little over 320
+in total) as an essential part of the book.
+we will try to publish the maximum amount of missing solutions, as well
+as additional exercises on the web page of one of the authors:
+http://hlombardi.free.fr/publis/livresbrochures.html
+–v–",0
+"abstract—automated decision making systems are increasingly
+being used in real-world applications. in these systems for the
+most part, the decision rules are derived by minimizing the
+training error on the available historical data.
therefore, if +there is a bias related to a sensitive attribute such as gender, +race, religion, etc. in the data, say, due to cultural/historical discriminatory practices against a certain demographic, the system +could continue discrimination in decisions by including the said +bias in its decision rule. we present an information theoretic +framework for designing fair predictors from data, which aim to +prevent discrimination against a specified sensitive attribute in a +supervised learning setting. we use equalized odds as the criterion +for discrimination, which demands that the prediction should be +independent of the protected attribute conditioned on the actual +label. to ensure fairness and generalization simultaneously, we +compress the data to an auxiliary variable, which is used for the +prediction task. this auxiliary variable is chosen such that it is +decontaminated from the discriminatory attribute in the sense +of equalized odds. the final predictor is obtained by applying a +bayesian decision rule to the auxiliary variable. +index terms—fairness, equalized odds, supervised learning.",7 +"abstract +this paper investigates the fundamental limits for detecting a high-dimensional sparse matrix +contaminated by white gaussian noise from both the statistical and computational perspectives. +we consider p×p matrices whose rows and columns are individually k-sparse. we provide a tight +characterization of the statistical and computational limits for sparse matrix detection, which +precisely describe when achieving optimal detection is easy, hard, or impossible, respectively. 
+although the sparse matrices considered in this paper have no apparent submatrix structure and
+the corresponding estimation problem has no computational issue at all, the detection problem
+has a surprising computational barrier when the sparsity level k exceeds the cubic root of the
+matrix size p: attaining the optimal detection boundary is computationally at least as hard as
+solving the planted clique problem.
+the same statistical and computational limits also hold in the sparse covariance matrix
+model, where each variable is correlated with at most k others. a key step in the construction
+of the statistically optimal test is a structural property for sparse matrices, which can be of
+independent interest.",7
+"abstract
+person re-identification (reid) is an important task in
+computer vision. recently, deep learning with a metric
+learning loss has become a common framework for reid. in
+this paper, we propose a new metric learning loss with hard
+sample mining called margin sample mining loss (msml)
+which can achieve better accuracy compared with other
+metric learning losses, such as triplet loss. in experiments,
+our proposed method outperforms most of the state-of-the-art algorithms on market1501, mars, cuhk03 and
+cuhk-sysu.",1
+"abstract
+this paper introduces a novel activity dataset which exhibits real-life and diverse scenarios of complex, temporally-extended human activities and actions. the dataset presents a set of videos of actors performing everyday activities
+in a natural and unscripted manner. the dataset was recorded using a static kinect 2 sensor which is commonly
+used on many robotic platforms. the dataset comprises rgb-d images, point cloud data, and automatically generated
+skeleton tracks, in addition to crowdsourced annotations. furthermore, we also describe the methodology used to
+acquire annotations through crowdsourcing. finally some activity recognition benchmarks are presented using current
+state-of-the-art techniques.
we believe that this dataset is particularly suitable as a testbed for activity recognition +research but it can also be applicable for other common tasks in robotics/computer vision research such as object +detection and human skeleton tracking. +keywords +activity dataset, crowdsourcing",1 +"abstract +the past few years have seen a surge of interest in the field of probabilistic logic learning +and statistical relational learning. in this endeavor, many probabilistic logics have been +developed. problog is a recent probabilistic extension of prolog motivated by the mining +of large biological networks. in problog, facts can be labeled with probabilities. these +facts are treated as mutually independent random variables that indicate whether these +facts belong to a randomly sampled program. different kinds of queries can be posed +to problog programs. we introduce algorithms that allow the efficient execution of these +queries, discuss their implementation on top of the yap-prolog system, and evaluate their +performance in the context of large networks of biological entities. +to appear in theory and practice of logic programming (tplp)",6 +"abstract: we consider a parameter estimation problem for one dimensional stochastic heat equations, when data is sampled discretely in time or spatial component. we establish some +general results on derivation of consistent and asymptotically normal estimators based +on computation of the p-variations of stochastic processes and their smooth perturbations. we apply these results to the considered spdes, by using some convenient +representations of the solutions. for some equations such results were ready available, +while for other classes of spdes we derived the needed representations along with their +statistical asymptotical properties. 
we prove that the real valued parameter next to +the laplacian, and the constant parameter in front of the noise (the volatility) can +be consistently estimated by observing the solution at a fixed time and on a discrete +spatial grid, or at a fixed space point and at discrete time instances of a finite interval, +assuming that the mesh-size goes to zero. +keywords: p-variation, statistics for spdes, discrete sampling, stochastic heat equation, inverse +problems for spdes, malliavin calculus. +msc2010: 60h15, 35q30, 65l09",10 +"abstract +interference arises when an individual’s potential outcome depends on the individual treatment level, but also on the treatment level of others. a common +assumption in the causal inference literature in the presence of interference is partial interference, implying that the population can be partitioned in clusters of +individuals whose potential outcomes only depend on the treatment of units within +the same cluster. previous literature has defined average potential outcomes under +counterfactual scenarios where treatments are randomly allocated to units within +a cluster. however, within clusters there may be units that are more or less likely +to receive treatment based on covariates or neighbors’ treatment. we define estimands that describe average potential outcomes for realistic counterfactual treatment allocation programs taking into consideration the units’ covariates, as well +as dependence between units’ treatment assignment. we discuss these estimands, +propose unbiased estimators and derive asymptotic results as the number of clusters +grows. finally, we estimate effects in a comparative effectiveness study of power +plant emission reduction technologies on ambient ozone pollution.",10 +"abstract +i introduce and analyse an anytime version of the optimally confident ucb +(ocucb) algorithm designed for minimising the cumulative regret in finitearmed stochastic bandits with subgaussian noise. 
the new algorithm is simple, +intuitive (in hindsight) and comes with the strongest finite-time regret guarantees +for a horizon-free algorithm so far. i also show a finite-time lower bound that +nearly matches the upper bound.",10 +abstract,6 +"abstract. let g be a finite connected simple graph with d vertices and let +pg ⊂ rd be the edge polytope of g. we call pg decomposable if pg decomposes +into integral polytopes pg+ and pg− via a hyperplane. in this paper, we explore +various aspects of decomposition of pg : we give an algorithm deciding the decomposability of pg , we prove that pg is normal if and only if both pg+ and pg− +are normal, and we also study how a condition on the toric ideal of pg (namely, +the ideal being generated by quadratic binomials) behaves under decomposition.",0 +"abstract +context context-free grammars are widely used for language prototyping and implementation. they allow +formalizing the syntax of domain-specific or general-purpose programming languages concisely and declaratively. however, the natural and concise way of writing a context-free grammar is often ambiguous. therefore, +grammar formalisms support extensions in the form of declarative disambiguation rules to specify operator +precedence and associativity, solving ambiguities that are caused by the subset of the grammar that corresponds to expressions. +inquiry implementing support for declarative disambiguation within a parser typically comes with one +or more of the following limitations in practice: a lack of parsing performance, or a lack of modularity (i.e., +disallowing the composition of grammar fragments of potentially different languages). the latter subject +is generally addressed by scannerless generalized parsers. we aim to equip scannerless generalized parsers +with novel disambiguation methods that are inherently performant, without compromising the concerns of +modularity and language composition. 
+approach in this paper, we present a novel low-overhead implementation technique for disambiguating +deep associativity and priority conflicts in scannerless generalized parsers with lightweight data-dependency. +knowledge ambiguities with respect to operator precedence and associativity arise from combining the +various operators of a language. while shallow conflicts can be resolved efficiently by one-level tree patterns, +deep conflicts require more elaborate techniques, because they can occur arbitrarily nested in a tree. current +state-of-the-art approaches to solving deep priority conflicts come with a severe performance overhead. +grounding we evaluated our new approach against state-of-the-art declarative disambiguation mechanisms. by parsing a corpus of popular open-source repositories written in java and ocaml, we found that our +approach yields speedups of up to 1.73 x over a grammar rewriting technique when parsing programs with +deep priority conflicts—with a modest overhead of 1 % to 2 % when parsing programs without deep conflicts. +importance a recent empirical study shows that deep priority conflicts are indeed wide-spread in realworld programs. the study shows that in a corpus of popular ocaml projects on github, up to 17 % of the +source files contain deep priority conflicts. however, there is no solution in the literature that addresses efficient disambiguation of deep priority conflicts, with support for modular and composable syntax definitions. +acm ccs 2012 +software and its engineering → syntax; parsers; +keywords declarative disambiguation, data-dependent grammars, operator precedence, performance, +parsing",6 +"abstract model from +∗ this research was supported by eu grants #269921 (brainscales), #237955 (facets-itn), #604102 (human brain +project), the austrian science fund fwf #i753-n23 (pneuma) and the manfred stärk foundation.",9 +"abstract +averaging provides an alternative to bandwidth selection for density kernel estimation. 
we propose a procedure to combine linearly +several kernel estimators of a density obtained from different, possibly +data-driven, bandwidths. the method relies on minimizing an easily +tractable approximation of the integrated square error of the combination. it provides, at a small computational cost, a final solution +that improves on the initial estimators in most cases. the average +estimator is proved to be asymptotically as efficient as the best possible combination (the oracle), with an error term that decreases faster +than the minimax rate obtained with separated learning and validation samples. the performances are tested numerically, with results +that compare favorably to other existing procedures in terms of mean +integrated square errors.",10 +"abstract +this paper discusses the conceptual design and proof-of-concept flight demonstration of a novel variable pitch quadrotor +biplane unmanned aerial vehicle concept for payload delivery. the proposed design combines vertical takeoff and landing +(vtol), precise hover capabilities of a quadrotor helicopter and high range, endurance and high forward cruise speed +characteristics of a fixed wing aircraft. the proposed uav is designed for a mission requirement of carrying and delivering +6 kg payload to a destination at 16 km from the point of origin. first, the design of proprotors is carried out using a +physics based modified blade element momentum theory (bemt) analysis, which is validated using experimental data +generated for the purpose. proprotors have conflicting requirement for optimal hover and forward flight performance. next, +the biplane wings are designed using simple lifting line theory. the airframe design is followed by power plant selection +and transmission design. finally, weight estimation is carried out to complete the design process. 
the proprotor design +with 24◦ preset angle and -24◦ twist is designed based on 70% weightage to forward flight and 30% weightage to hovering +flight conditions. the operating rpm of the proprotors is reduced from 3200 during hover to 2000 during forward flight +to ensure optimal performance during cruise flight. the estimated power consumption during forward flight mode is 64% +less than that required for hover, establishing the benefit of this hybrid concept. a proof-of-concept scaled prototype is +fabricated using commercial-off-the-shelf parts. a pid controller is developed and implemented on the pixhawk board to +enable stable hovering flight and attitude tracking. +keywords +variable pitch, quadrotor tailsitter uav, uav design, blade element theory, payload delivery",3 +"abstract— reconstructing the states of the nodes of a dynamical network is a problem of fundamental importance in the +study of neuronal and genetic networks. an underlying related +problem is that of observability, i.e., identifying the conditions +under which such a reconstruction is possible. in this paper +we study observability of complex dynamical networks, where, +we consider the effects of network symmetries on observability. +we present an efficient algorithm that returns a minimal set +of necessary sensor nodes for observability in the presence of +symmetries.",8 +"abstract—increasingly large document collections require +improved information processing methods for searching, +retrieving, and organizing text. central to these information +processing methods is document classification, which has become +an important application for supervised learning. recently the +performance of traditional supervised classifiers has degraded as +the number of documents has increased. this is because along +with growth in the number of documents has come an increase +in the number of categories. 
this paper approaches this problem +differently from current document classification methods that +view the problem as multi-class classification. instead we +perform hierarchical classification using an approach we call +hierarchical deep learning for text classification (hdltex). +hdltex employs stacks of deep learning architectures to +provide specialized understanding at each level of the document +hierarchy.",1 +"abstract +we show that existence of a global polynomial lyapunov function for a homogeneous polynomial vector field or a planar polynomial vector field (under a mild condition) implies existence +of a polynomial lyapunov function that is a sum of squares (sos) and that the negative of its +derivative is also a sum of squares. this result is extended to show that such sos-based certificates of stability are guaranteed to exist for all stable switched linear systems. for this class of +systems, we further show that if the derivative inequality of the lyapunov function has an sos +certificate, then the lyapunov function itself is automatically a sum of squares. these converse +results establish cases where semidefinite programming is guaranteed to succeed in finding proofs +of lyapunov inequalities. finally, we demonstrate some merits of replacing the sos requirement +on a polynomial lyapunov function with an sos requirement on its top homogeneous component. +in particular, we show that this is a weaker algebraic requirement in addition to being cheaper +to impose computationally.",3 +"abstract +we present fashion-mnist, a new dataset comprising of 28 × 28 grayscale +images of 70, 000 fashion products from 10 categories, with 7, 000 images +per category. the training set has 60, 000 images and the test set has +10, 000 images. 
+fashion-mnist is intended to serve as a direct drop-in replacement for the original mnist dataset for benchmarking machine
+learning algorithms, as it shares the same image size, data format and the
+structure of training and testing splits. the dataset is freely available at
+https://github.com/zalandoresearch/fashion-mnist.",1
+"abstract
+the prevention of dangerous chemical accidents is a primary problem of industrial manufacturing. in the accidents of dangerous chemicals, the oil gas explosion plays an important
+role. the essential task of explosion prevention is to estimate a better explosion limit
+of a given oil gas. in this paper, support vector machines (svm) and logistic regression
+(lr) are used to predict the explosion of oil gas. lr can give the explicit probability formula
+of explosion, and the explosive range of the concentrations of oil gas according to the concentration of oxygen. meanwhile, svm gives higher accuracy of prediction. furthermore,
+considering the practical requirements, the effects of the penalty parameter on the distribution
+of the two types of errors are discussed.
+keywords: explosion prediction, oil gas, svm, logistic regression, penalty parameter",5
+"abstract—in this paper, the structural controllability of
+the systems over f(z) is studied using a new mathematical
+method: matroids. firstly, a vector matroid is defined over
+f(z). secondly, the full rank conditions of [si − a | b]",3
+"abstract— we study planning problems where autonomous
+agents operate inside environments that are subject to uncertainties and not fully observable. partially observable markov
+decision processes (pomdps) are a natural formal model to
+capture such problems. because of the potentially huge or
+even infinite belief space in pomdps, synthesis with safety
+guarantees is, in general, computationally intractable.
we
+propose an approach that aims to circumvent this difficulty:
+in scenarios that can be partially or fully simulated in a virtual
+environment, we actively integrate a human user to control
+an agent. while the user repeatedly tries to safely guide the
+agent in the simulation, we collect data from the human input.
+via behavior cloning, we translate the data into a strategy
+for the pomdp. the strategy resolves all nondeterminism and
+non-observability of the pomdp, resulting in a discrete-time
+markov chain (mc). the efficient verification of this mc gives
+quantitative insights into the quality of the inferred human
+strategy by proving or disproving given system specifications.
+for the case that the quality of the strategy is not sufficient, we
+propose a refinement method using counterexamples presented
+to the human. experiments show that by including humans into
+the pomdp verification loop we improve the state of the art
+by orders of magnitude in terms of scalability.",2
+"abstraction for describing feedforward (and potentially recurrent) neural network architectures is that of computational skeletons as
+introduced in daniely et al. (2016). recall the following definition.
+definition 3.1. a computational skeleton s is a directed acyclic graph whose non-input nodes are
+labeled by activations.
+daniely et al. (2016) provides an excellent account of how these graph structures abstract the many
+neural network architectures we see in practice. we will give these skeletons ""flesh and skin""
+so to speak, and in doing so pursue a suitable generalization of neural networks which allows
+intermediate mappings between possibly infinite dimensional topological vector spaces. dfms are
+that generalization.
+definition 3.2 (deep function machines). a deep function machine d is a computational skeleton
+s indexed by i with the following properties:
+• every vertex in s is a topological vector space xℓ where ℓ ∈ i.
+• if nodes ` ∈ a ⊂ i feed into `′ then the activation on `′ is denoted y`′ ∈ x`′ and is defined
+as
+y`′ = g( σ`∈a t` y` )    (3.1)",1
+"abstract
+the features of non-stationary multi-component signals are often difficult to extract for expert
+systems. in this paper, a new method for feature extraction that is based on maximization of the local
+gaussian correlation function of the wavelet coefficients and the signal is presented. the effect of using empirical
+mode decomposition (emd) to decompose multi-component signals into intrinsic mode functions
+(imfs) before applying the local gaussian correlation is discussed. the experimental vibration signals
+from two gearbox systems are used to show the efficiency of the presented method. a linear support
+vector machine (svm) is utilized to classify the feature sets extracted with the presented method. the
+obtained results show that the features extracted with this method have an excellent ability to classify
+faults without any additional feature selection; it is also shown that emd can improve or degrade
+features according to the utilized feature reduction method.
+keywords: gear, gaussian-correlation, wavelet, emd, svm, fault detection
+1. introduction
+nowadays, vibration condition monitoring of industrial machines is used as a suitable tool for early
+detection of a variety of faults. data acquisition, feature extraction, and classification are three general
+parts of any expert monitoring system. one of the most difficult and important procedures in fault
+diagnosis is feature extraction, which is done by signal processing methods. there are various
+techniques in signal processing, which are usually categorized into time (e.g. [1, 2]), frequency (e.g.
+[3]), and time-frequency (e.g. [4, 5]) domain analyses.
among these, time-frequency analyses have
+attracted more attention because these methods provide an energy distribution of the signal in the time-frequency plane simultaneously, so the frequency intensity of non-stationary signals can be analyzed in
+the time domain.
+the continuous wavelet transform (cwt), as a time-frequency representation of a signal, provides an
+effective tool for vibration-based fault detection. cwt provides a multi-resolution
+capability in analyzing the transitory features of non-stationary signals. alongside the advantages of
+cwt, there are some drawbacks; one of these is that cwt produces redundant data, which makes
+feature extraction more complicated. due to this data redundancy, data mining and feature reduction
+are extensively used, such as decision trees (dt) (e.g. [6]), principal component analysis (pca) (e.g.
+[7]), independent component analysis (ica) (e.g. [8-10]), genetic algorithms with support vector
+machines (ga-svm) (e.g. [1, 2]), genetic algorithms with artificial neural networks (ga-ann) (e.g.
+[1, 2]), self-organizing maps (som) (e.g. [11]), etc.
+the selection of wavelet bases is very important in order to achieve the maximal feature extraction
+capability for the desired faults. as an alternative, tse et al. [4] presented ""exact wavelet analysis"" for
+selection of the best wavelet family member and reduction of data redundancy. in this method, for
the tool is accessible on the web, permits user programs to be uploaded +and analysed, and is integrated with related program transformations such as size +abstractions and query-answer transformation. we then report some experiments +using the tool, showing how it can be conveniently used to analyse transition +systems arising from models of embedded systems, and an emulator for a pic microcontroller which is used for example in wearable computing systems. we discuss +issues including scalability, tradeoffs of precision and computation time, and other +program transformations that can enhance the results of analysis.",6 +"abstract +notations in formulas +s(m ) +s +smax +λ(m ) +|λ|max +θ(.) +θ0 , θ̇ +win +w +wout +ut +ū∞ +xt +yt +ot +o˜t +d(., .) +f(.) +fū∞ +fcrit +qū∞ ,t +qt +φ1 () +φk () +η, κ, γ +λū∞",9 +"abstract +we consider a multi-agent framework for distributed optimization where each agent in the +network has access to a local convex function and the collective goal is to achieve consensus +on the parameters that minimize the sum of the agents’ local functions. we propose an algorithm wherein each agent operates asynchronously and independently of the other agents in +the network. when the local functions are strongly-convex with lipschitz-continuous gradients, +we show that a subsequence of the iterates at each agent converges to a neighbourhood of the +global minimum, where the size of the neighbourhood depends on the degree of asynchrony in +the multi-agent network. when the agents work at the same rate, convergence to the global minimizer is achieved. numerical experiments demonstrate that asynchronous subgradient-push +can minimize the global objective faster than state-of-the-art synchronous first-order methods, +is more robust to failing or stalling agents, and scales better with the network size.",3 +"abstract +we propose hilbert transform and analytic signal construction for signals over graphs. 
+this is motivated by the popularity of hilbert transform, analytic signal, and modulation analysis in conventional signal processing, and the observation that complementary insight is often obtained by viewing conventional signals in the graph setting. +our definitions of hilbert transform and analytic signal use a conjugate-symmetry-like +property exhibited by the graph fourier transform (gft), resulting in a ’one-sided’ +spectrum for the graph analytic signal. the resulting graph hilbert transform is shown +to possess many interesting mathematical properties and also exhibit the ability to highlight anomalies/discontinuities in the graph signal and the nodes across which signal +discontinuities occur. using the graph analytic signal, we further define amplitude, +phase, and frequency modulations for a graph signal. we illustrate the proposed concepts by showing applications to synthesized and real-world signals. for example, +we show that the graph hilbert transform can indicate presence of anomalies and that +graph analytic signal, and associated amplitude and frequency modulations reveal complementary information in speech signals. +keywords: graph signal, analytic signal, hilbert transform, demodulation, anomaly +detection. +email addresses: arunv@kth.se (arun venkitaraman), sach@kth.se (saikat chatterjee), +ph@kth.se (peter händel)",7 +abstract,1 +"abstract. draisma recently proved that polynomial representations of gl∞ are topologically noetherian. we generalize this result to algebraic representations of infinite rank +classical groups.",0 +"abstract—we propose the learned primal-dual algorithm for +tomographic reconstruction. the algorithm accounts for a (possibly non-linear) forward operator in a deep neural network by +unrolling a proximal primal-dual optimization method, but where +the proximal operators have been replaced with convolutional +neural networks. 
the algorithm is trained end-to-end, working +directly from raw measured data and it does not depend on any +initial reconstruction such as filtered back-projection (fbp). +we compare performance of the proposed method on low +dose computed tomography reconstruction against fbp, total +variation (tv), and deep learning based post-processing of fbp. +for the shepp-logan phantom we obtain > 6 db psnr improvement against all compared methods. for human phantoms +the corresponding improvement is 6.6 db over tv and 2.2 db +over learned post-processing along with a substantial improvement in the structural similarity index. finally, our algorithm +involves only ten forward-back-projection computations, making +the method feasible for time critical clinical applications. +index terms—inverse problems, tomography, deep learning, +primal-dual, optimization",9 +"abstract— in this work, we introduce a compositional framework for the construction of finite abstractions (a.k.a. symbolic +models) of interconnected discrete-time control systems. the +compositional scheme is based on the joint dissipativity-type +properties of discrete-time control subsystems and their finite +abstractions. in the first part of the paper, we use a notion +of so-called storage function as a relation between each subsystem and its finite abstraction to construct compositionally +a notion of so-called simulation function as a relation between +interconnected finite abstractions and that of control systems. +the derived simulation function is used to quantify the error +between the output behavior of the overall interconnected +concrete system and that of its finite abstraction. in the +second part of the paper, we propose a technique to construct +finite abstractions together with their corresponding storage +functions for a class of discrete-time control systems under some +incremental passivity property. 
we show that if a discrete-time +control system is so-called incrementally passivable, then one +can construct its finite abstraction by a suitable quantization +of the input and state sets together with the corresponding +storage function. finally, the proposed results are illustrated by +constructing a finite abstraction of a network of linear discretetime control systems and its corresponding simulation function +in a compositional way. the compositional conditions in this +example do not impose any restriction on the gains or the +number of the subsystems which, in particular, elucidates the +effectiveness of dissipativity-type compositional reasoning for +networks of systems.",3 +"abstract +a distributed discrete-time algorithm is proposed for multi-agent networks to achieve a +common least squares solution of a group of linear equations, in which each agent only knows +some of the equations and is only able to receive information from its nearby neighbors. for +fixed, connected, and undirected networks, the proposed discrete-time algorithm results in each +agents solution estimate to converging exponentially fast to the same least squares solution. +moreover, the convergence does not require careful choices of time-varying small step sizes.",3 +abstract,6 +"abstract +in this tool demonstration, we give an overview of the chameleon type debugger. the type debugger’s primary use is to identify locations within a source program which are involved in a type error. by further +examining these (potentially) problematic program locations, users gain a better understanding of their program and are able to work towards the actual mistake which was the cause of the type error. the debugger +is interactive, allowing the user to provide additional information to narrow down the search space. one +of the novel aspects of the debugger is the ability to explain erroneous-looking types. 
in the event that an +unexpected type is inferred, the debugger can highlight program locations which contributed to that result. +furthermore, due to the flexible constraint-based foundation that the debugger is built upon, it can naturally +handle advanced type system features such as haskell’s type classes and functional dependencies. +keywords :",6 +"abstract +a solution is provided in this note for the adaptive consensus problem of nonlinear multi-agent systems with unknown and non-identical +control directions assuming a strongly connected underlying graph +topology. this is achieved with the introduction of a novel variable +transformation called pi consensus error transformation. the new +variables include the position error of each agent from some arbitrary +fixed point along with an integral term of the weighted total displacement of the agent’s position from all neighbor positions. it is proven +that if these new variables are bounded and regulated to zero, then +asymptotic consensus among all agents is ensured. the important +feature of this transformation is that it provides input decoupling in +the dynamics of the new error variables making the consensus control design a simple and direct task. using classical nussbaum gain +based techniques, distributed controllers are designed to regulate the +pi consensus error variables to zero and ultimately solve the agreement +problem. the proposed approach also allows for a specific calculation +of the final consensus point based on the controller parameter selection and the associated graph topology. simulation results verify our +theoretical derivations.",3 +"abstract +current speech enhancement techniques operate on the spectral +domain and/or exploit some higher-level feature. the majority +of them tackle a limited number of noise conditions and rely on +first-order statistics. 
to circumvent these issues, deep networks +are being increasingly used, thanks to their ability to learn complex functions from large example sets. in this work, we propose the use of generative adversarial networks for speech enhancement. in contrast to current techniques, we operate at the +waveform level, training the model end-to-end, and incorporate 28 speakers and 40 different noise conditions into the same +model, such that model parameters are shared across them. we +evaluate the proposed model using an independent, unseen test +set with two speakers and 20 alternative noise conditions. the +enhanced samples confirm the viability of the proposed model, +and both objective and subjective evaluations confirm the effectiveness of it. with that, we open the exploration of generative architectures for speech enhancement, which may progressively incorporate further speech-centric design choices to +improve their performance. +index terms: speech enhancement, deep learning, generative +adversarial networks, convolutional neural networks.",9 +"abstract +automatic summarisation is a popular approach to reduce a document to its main +arguments. recent research in the area has +focused on neural approaches to summarisation, which can be very data-hungry. +however, few large datasets exist and none +for the traditionally popular domain of scientific publications, which opens up challenging research avenues centered on encoding large, complex documents. in this +paper, we introduce a new dataset for +summarisation of computer science publications by exploiting a large resource +of author provided summaries and show +straightforward ways of extending it further. we develop models on the dataset +making use of both neural sentence encoding and traditionally used summarisation features and show that models which +encode sentences as well as their local and global context perform best, significantly outperforming well-established +baseline methods.",2 +"abstract. 
we give a new computer-assisted proof of the classification of maximal subgroups of the simple group 2 e6 (2) and its extensions by any subgroup +of the outer automorphism group s3 . this is not a new result, but no earlier +proof exists in the literature. a large part of the proof consists of a computational analysis of subgroups generated by an element of order 2 and an element +of order 3. this method can be effectively automated, and via statistical analysis also provides a sanity check on results that may have been obtained by +delicate theoretical arguments.",4 +"abstract—recognizing objects in natural images is an +intricate problem involving multiple conflicting objectives. +deep convolutional neural networks, trained on large datasets, +achieve convincing results and are currently the state-of-theart approach for this task. however, the long time needed to +train such deep networks is a major drawback. we tackled +this problem by reusing a previously trained network. for +this purpose, we first trained a deep convolutional network +on the ilsvrc -12 dataset. we then maintained the learned +convolution kernels and only retrained the classification part +on different datasets. using this approach, we achieved an +accuracy of 67.68 % on cifar -100, compared to the previous +state-of-the-art result of 65.43 %. furthermore, our findings +indicate that convolutional networks are able to learn generic +feature extractors that can be used for different tasks.",1 +"abstract +this paper presents a computational model for the cooperation of constraint domains +and an implementation for a particular case of practical importance. the computational +model supports declarative programming with lazy and possibly higher-order functions, +predicates, and the cooperation of different constraint domains equipped with their respective solvers, relying on a so-called constraint functional logic programming (cf lp ) +scheme. 
the implementation has been developed on top of the cflp system toy,
+supporting the cooperation of the three domains h, r and fd, which supply equality and
+disequality constraints over symbolic terms, arithmetic constraints over the real numbers,
+and finite domain constraints over the integers, respectively. the computational model
+has been proved sound and complete w.r.t. the declarative semantics provided by the
+cflp scheme, while the implemented system has been tested with a set of benchmarks
+and shown to behave quite efficiently in comparison to the closest related approach we are
+aware of.
+to appear in theory and practice of logic programming (tplp)
+keywords: cooperating constraint domains, constraint functional logic programming, constrained lazy narrowing, implementation.",6
+"abstract
+monte carlo (mc) simulations of transport in random porous networks indicate that for high variances of the lognormal permeability distribution, the transport of a passive tracer is non-fickian. here we model this non-fickian
+dispersion in random porous networks using discrete temporal markov models. we show that such temporal models
+capture the spreading behavior accurately. this is true despite the fact that the slow velocities are strongly correlated
+in time, and some studies have suggested that the persistence of low velocities would render the temporal markovian
+model inapplicable. compared to previously proposed temporal stochastic differential equations with case-specific drift
+and diffusion terms, the models presented here require fewer modeling assumptions. moreover, we show that discrete
+temporal markov models can be used to represent dispersion in unstructured networks, which are widely used to model
+porous media.
a new method is proposed to extend the state space of temporal markov models to improve the model +predictions in the presence of extremely low velocities in particle trajectories and extend the applicability of the model +to higher temporal resolutions. finally, it is shown that by combining multiple transitions, temporal models are more +efficient for computing particle evolution compared to correlated ctrw with spatial increments that are equal to the +lengths of the links in the network. +keywords: anomalous transport, markov models, stochastic transport modeling, stencil method +1. introduction +modeling transport in porous media is highly important in various applications including water resources +management and extraction of fossil fuels. predicting +flow and transport in aquifers and reservoirs plays an +important role in managing these resources. a significant +factor influencing transport is the heterogeneity of the +flow field, which results from the underlying heterogeneity +of the conductivity field. transport in such heterogeneous +domains displays non-fickian characteristics such as +long tails for the first arrival time probability density +function (pdf) and non-gaussian spatial distributions +(berkowitz et al., 2006; bouchaud and georges, 1990; +edery et al., 2014). capturing this non-fickian behavior +is particularly important for predictions of contaminant +transport in water resources. for example, in water +resources management long tails of the arrival time pdf +can have a major impact on the contamination of drinking +water, and therefore efficient predictions of the spatial +extents of contaminant plumes is key (nowak et al., 2012; +moslehi and de barros, 2017; ghorbanidehno et al., 2015).",5 +"abstract. we classify conjugacy classes of involutions in the isometry groups +of nondegenerate, symmetric bilinear forms over the field f2 . the new component of this work focuses on the case of an orthogonal form on an evendimensional space. 
in this context we show that the involutions satisfy a +remarkable duality, and we investigate several numerical invariants.",4 +"abstract +deep convolutional networks (cnns) have exhibited +their potential in image inpainting for producing plausible results. however, in most existing methods, e.g., context encoder, the missing parts are predicted by propagating +the surrounding convolutional features through a fully connected layer, which intends to produce semantically plausible but blurry result. in this paper, we introduce a special shift-connection layer to the u-net architecture, namely +shift-net, for filling in missing regions of any shape with +sharp structures and fine-detailed textures. to this end, the +encoder feature of the known region is shifted to serve as +an estimation of the missing parts. a guidance loss is introduced on decoder feature to minimize the distance between the decoder feature after fully connected layer and +the ground truth encoder feature of the missing parts. with +such constraint, the decoder feature in missing region can +be used to guide the shift of encoder feature in known +region. an end-to-end learning algorithm is further developed to train the shift-net. experiments on the paris +streetview and places datasets demonstrate the efficiency +and effectiveness of our shift-net in producing sharper, finedetailed, and visually plausible results.",1 +"abstract: we consider multivariate polynomials and investigate how many zeros of +multiplicity at least r they can have over a cartesian product of finite subsets of a field. +here r is any prescribed positive integer and the definition of multiplicity that we use is the +one related to hasse derivatives. as a generalization of material in [2, 5] a general version of +the schwartz-zippel was presented in [8] which from the leading monomial – with respect to +a lexicographic ordering – estimates the sum of zeros when counted with multiplicity. 
the +corresponding corollary on the number of zeros of multiplicity at least r is in general not +sharp and therefore in [8] a recursively defined function d was introduced using which one +can derive improved information. the recursive function being rather complicated, the only +known closed formula consequences of it are for the case of two variables [8]. in the present +paper we derive closed formula consequences for arbitrary many variables, but for the powers +in the leading monomial being not too large. our bound can be viewed as a generalization of +the footprint bound [10, 6] – the classical footprint bound taking not multiplicity into account.",0 +"abstract +1",1 +"abstract +in this paper an autoregressive time series model with conditional heteroscedasticity is +considered, where both conditional mean and conditional variance function are modeled +nonparametrically. a test for the model assumption of independence of innovations from +past time series values is suggested. the test is based on an weighted l2 -distance of +empirical characteristic functions. the asymptotic distribution under the null hypothesis +of independence is derived and consistency against fixed alternatives is shown. a smooth +autoregressive residual bootstrap procedure is suggested and its performance is shown in +a simulation study.",10 +"abstract. recurrent neural networks and in particular long shortterm memory (lstm) networks have demonstrated state-of-the-art accuracy in several emerging artificial intelligence tasks. however, the +models are becoming increasingly demanding in terms of computational +and memory load. emerging latency-sensitive applications including mobile robots and autonomous vehicles often operate under stringent computation time constraints. 
in this paper, we address the challenge of deploying computationally demanding lstms at a constrained time budget +by introducing an approximate computing scheme that combines iterative low-rank compression and pruning, along with a novel fpga-based +lstm architecture. combined in an end-to-end framework, the approximation method’s parameters are optimised and the architecture is configured to address the problem of high-performance lstm execution in +time-constrained applications. quantitative evaluation on a real-life image captioning application indicates that the proposed methods required +up to 6.5× less time to achieve the same application-level accuracy compared to a baseline method, while achieving an average of 25× higher +accuracy under the same computation time constraints. +keywords: lstm, low-rank approximation, pruning, fpgas",1 +"abstract. there has been substantial interest in estimating the value of a graph parameter, i.e., +of a real-valued function defined on the set of finite graphs, by querying a randomly sampled +substructure whose size is independent of the size of the input. graph parameters that may be +successfully estimated in this way are said to be testable or estimable, and the sample complexity +qz = qz (ε) of an estimable parameter z is the size of a random sample of a graph g required to +ensure that the value of z(g) may be estimated within an error of ε with probability at least 2/3. in +this paper, for any fixed monotone graph property p = forb(f), we study the sample complexity +of estimating a bounded graph parameter zf that, for an input graph g, counts the number of +spanning subgraphs of g that satisfy p. 
to improve upon previous upper
+bounds on the sample complexity, we show that the vertex set of any graph that satisfies a monotone property p may be
+partitioned equitably into a constant number of classes in such a way that the cluster graph induced
+by the partition is not far from satisfying a natural weighted graph generalization of p. properties
+for which this holds are said to be recoverable, and the study of recoverable properties may be of
+independent interest.",8
+"abstract
+the next generation wireless networks (i.e. 5g and beyond), which would be extremely dynamic and
+complex due to the ultra-dense deployment of heterogeneous networks (hetnets), pose many critical
+challenges for network planning, operation, management and troubleshooting. at the same time, generation
+and consumption of wireless data are becoming increasingly distributed with the ongoing paradigm shift from
+people-centric to machine-oriented communications, making the operation of future wireless networks even
+more complex. in mitigating the complexity of future network operation, new approaches of intelligently
+utilizing distributed computational resources with improved context-awareness become extremely important.
+in this regard, the emerging fog (edge) computing architecture, aiming to distribute computing, storage, control,
+communication, and networking functions closer to end users, has great potential for enabling efficient
+operation of future wireless networks. these promising architectures make the adoption of artificial intelligence
+(ai) principles which incorporate learning, reasoning and decision-making mechanisms a natural choice
+for designing a tightly integrated network. towards this end, this article provides a comprehensive survey
+on the utilization of ai integrating machine learning, data analytics and natural language processing (nlp)
+techniques for enhancing the efficiency of wireless network operation.
in particular, we provide comprehensive +discussion on the utilization of these techniques for efficient data acquisition, knowledge discovery, network +planning, operation and management of the next generation wireless networks. a brief case study utilizing the +ai techniques for this network has also been provided. +keywords– 5g and beyond, artificial (machine) intelligence, context-aware-wireless, ml, nlp, ontology",7 +"abstract. in photoacoustic imaging (pa), delay-and-sum (das) beamformer is a common beamforming algorithm +having a simple implementation. however, it results in a poor resolution and high sidelobes. to address these challenges, a new algorithm namely delay-multiply-and-sum (dmas) was introduced having lower sidelobes compared +to das. to improve the resolution of dmas, a novel beamformer is introduced using minimum variance (mv) adaptive beamforming combined with dmas, so-called minimum variance-based dmas (mvb-dmas). it is shown +that expanding the dmas equation results in multiple terms representing a das algebra. it is proposed to use the +mv adaptive beamformer instead of the existing das. mvb-dmas is evaluated numerically and experimentally. in +particular, at the depth of 45 mm mvb-dmas results in about 31 db, 18 db and 8 db sidelobes reduction compared +to das, mv and dmas, respectively. the quantitative results of the simulations show that mvb-dmas leads to +improvement in full-width-half-maximum about 96 %, 94 % and 45 % and signal-to-noise ratio about 89 %, 15 % and +35 % compared to das, dmas, mv, respectively. in particular, at the depth of 33 mm of the experimental images, +mvb-dmas results in about 20 db sidelobes reduction in comparison with other beamformers. +keywords: photoacoustic imaging, beamforming, delay-multiply-and-sum, minimum variance, linear-array imaging. 
+*ali mahloojifar, mahlooji@modares.ac.ir",7 +"abstract a markov decision process (mdp) framework is adopted to represent +ensemble control of devices with cyclic energy consumption patterns, e.g., thermostatically controlled loads. specifically we utilize and develop the class of mdp +models previously coined linearly solvable mdps, that describe optimal dynamics +of the probability distribution of an ensemble of many cycling devices. two principally different settings are discussed. first, we consider optimal strategy of the +ensemble aggregator balancing between minimization of the cost of operations and +minimization of the ensemble welfare penalty, where the latter is represented as a +kl-divergence between actual and normal probability distributions of the ensemble. +then, second, we shift to the demand response setting modeling the aggregator’s +task to minimize the welfare penalty under the condition that the aggregated consumption matches the targeted time-varying consumption requested by the system +operator. we discuss a modification of both settings aimed at encouraging or constraining the transitions between different states. the dynamic programming feature +of the resulting modified mdps is always preserved; however, ‘linear solvability’ is +lost fully or partially, depending on the type of modification. we also conducted +some (limited in scope) numerical experimentation using the formulations of the +first setting. we conclude by discussing future generalizations and applications.",3 +"abstract. let k be a field of characteristic zero and a a kalgebra such that all the k-subalgebras generated by finitely many +elements of a are finite dimensional over k. a k-e-derivation of +a is a k-linear map of the form i − φ for some k-algebra endomorphism φ of a, where i denotes the identity map of a. 
in
this paper we first show that for all locally finite k-derivations
d and locally finite k-algebra automorphisms φ of a, the images of d and i − φ do not contain any nonzero idempotent of
a. we then use this result to show some cases of the lfed and
lned conjectures proposed in [z4]. more precisely, we show the
lned conjecture for a, and the lfed conjecture for all locally
finite k-derivations of a and all locally finite k-e-derivations of
the form δ = i − φ with φ being surjective. in particular, both
conjectures are proved for all finite dimensional k-algebras. furthermore, some finite extensions of derivations and automorphisms
to inner derivations and inner automorphisms, respectively, have
also been established. this result is not only crucial in the proofs
of the results above, but also interesting in its own right.",0
"abstract
a univariate polynomial f over a field is decomposable if f =
g ◦ h = g(h) for nonlinear polynomials g and h. in order to count the
decomposables, one wants to know, under a suitable normalization, the
number of equal-degree collisions of the form f = g ◦ h = g ∗ ◦ h∗ with
(g, h) ≠ (g ∗ , h∗ ) and deg g = deg g ∗ . such collisions only occur in the
wild case, where the field characteristic p divides deg f . reasonable
bounds on the number of decomposables over a finite field are known,
but they are less sharp in the wild case, in particular for degree p².
we provide a classification of all polynomials of degree p² with
a collision. it yields the exact number of decomposable polynomials
of degree p² over a finite field of characteristic p. we also present
an efficient algorithm that determines whether a given polynomial of
degree p² has a collision or not.",0
"abstract
inhomogeneous random graph models encompass many network models such as stochastic block
models and latent position models.
we consider the problem of statistical estimation of the matrix of +connection probabilities based on the observations of the adjacency matrix of the network. taking the +stochastic block model as an approximation, we construct estimators of network connection probabilities +– the ordinary block constant least squares estimator, and its restricted version. we show that they +satisfy oracle inequalities with respect to the block constant oracle. as a consequence, we derive optimal +rates of estimation of the probability matrix. our results cover the important setting of sparse networks. +another consequence consists in establishing upper bounds on the minimax risks for graphon estimation +in the l2 norm when the probability matrix is sampled according to a graphon model. these bounds +include an additional term accounting for the “agnostic” error induced by the variability of the latent +unobserved variables of the graphon model. in this setting, the optimal rates are influenced not only +by the bias and variance components as in usual nonparametric problems but also include the third +component, which is the agnostic error. the results shed light on the differences between estimation +under the empirical loss (the probability matrix estimation) and under the integrated loss (the graphon +estimation).",10 +"abstract +motivation: intimately tied to assembly quality is the complexity of the de bruijn graph built by the +assembler. thus, there have been many paradigms developed to decrease the complexity of the de bruijn +graph. one obvious combinatorial paradigm for this is to allow the value of k to vary; having a larger value +of k where the graph is more complex and a smaller value of k where the graph would likely contain fewer +spurious edges and vertices. one open problem that affects the practicality of this method is how to predict +the value of k prior to building the de bruijn graph. 
we show that optimal values of k can be predicted
prior to assembly by using the information contained in a phylogenetically-close genome and therefore, help
make the use of multiple values of k practical for genome assembly.
results: we present hyda-vista, which is a genome assembler that uses homology information to choose
a value of k for each read prior to the de bruijn graph construction. the chosen k is optimal if there are
no sequencing errors and the coverage is sufficient. fundamental to our method is the construction of the
maximal sequence landscape, which is a data structure that stores for each position in the input string, the
largest repeated substring containing that position. in particular, we show the maximal sequence landscape
can be constructed in o(n+n log n)-time and o(n)-space. hyda-vista first constructs the maximal sequence
landscape for a homologous genome. the reads are then aligned to this reference genome, and values of k are
assigned to each read using the maximal sequence landscape and the alignments. eventually, all the reads
are assembled by an iterative de bruijn graph construction method. our results and comparison to other
assemblers demonstrate that hyda-vista achieves the best assembly of e. coli before repeat resolution or
scaffolding.
availability: hyda-vista is freely available at https://sites.google.com/site/hydavista. the code
for constructing the maximal sequence landscape and choosing the optimal value of k for each read is
also on the website and could be incorporated into any genome assembler.
contact: basir@cs.colostate.edu",5
"abstract. euclidean functions with values in an arbitrary well-ordered set
were first considered in a 1949 work of motzkin and studied in more detail
in work of fletcher, samuel and nagata in the 1970’s and 1980’s. here these
results are revisited, simplified, and extended.
the two main themes are (i)
consideration of ord-valued functions on an artinian poset and (ii) use of
ordinal arithmetic, including the hessenberg-brookfield ordinal sum. in particular, to any euclidean ring we associate an ordinal invariant, its euclidean
order type, and we initiate a study of this invariant. the main new result
gives upper and lower bounds on the euclidean order type of a finite product
of euclidean rings in terms of the euclidean order types of the factor rings.",0
"abstract we study the problem of computing the maxima of a set of
n d-dimensional points. for dimensions 2 and 3, there are algorithms to
solve the problem with order-oblivious instance-optimal running time.
however, in higher dimensions there is still room for improvements. we
present an algorithm sensitive to the structural entropy of the input set,
which improves the running time, for large classes of instances, on the
best solution for maxima to date for d ≥ 4.",8
"abstract
in [8] one of the authors constructed uncountable families of groups of type f p
and of n-dimensional poincaré duality groups for each n ≥ 4. we show that the
groups constructed in [8] comprise uncountably many quasi-isometry classes. we
deduce that for each n ≥ 4 there are uncountably many quasi-isometry classes of
acyclic n-manifolds admitting free cocompact properly discontinuous discrete group
actions.",4
"abstract: this paper develops a novel approach to obtaining the optimal scheduling strategy in a multi-input multi-output (mimo)
multi-access channel (mac), where each transmitter is powered by an individual energy harvesting process. relying on the state-of-the-art convex optimization tools, the proposed approach provides a low-complexity block coordinate ascent algorithm to obtain
the optimal transmission policy that maximizes the weighted sum-throughput for mimo mac.
the proposed approach can provide +the optimal benchmarks for all practical schemes in energy-harvesting powered mimo mac transmissions. based on the revealed +structure of the optimal policy, we also propose an efficient online scheme, which requires only causal knowledge of energy arrival +realizations. numerical results are provided to demonstrate the merits of the proposed novel scheme.",7 +"abstract +this paper presents findings for training a q-learning reinforcement learning agent +using natural gradient techniques. we compare the original deep q-network (dqn) +algorithm to its natural gradient counterpart (ngdqn), measuring ngdqn and +dqn performance on classic controls environments without target networks. we +find that ngdqn performs favorably relative to dqn, converging to significantly +better policies faster and more frequently. these results indicate that natural +gradient could be used for value function optimization in reinforcement learning to +accelerate and stabilize training.",2 +"abstract +epileptic seizure activity shows complicated dynamics in both space and time. to understand +the evolution and propagation of seizures spatially extended sets of data need to be analysed. +we have previously described an efficient filtering scheme using variational laplace that can be +used in the dynamic causal modelling (dcm) +framework (friston et al., 2003) to estimate the +temporal dynamics of seizures recorded using either invasive or non-invasive electrical recordings (eeg/ecog). spatiotemporal dynamics are +modelled using a partial differential equation – +in contrast to the ordinary differential equation +used in our previous work on temporal estimation +of seizure dynamics (cooray et al., 2016). we +provide the requisite theoretical background for +the method and test the ensuing scheme on simulated seizure activity data and empirical invasive +ecog data. 
the method provides a framework
to assimilate the spatial and temporal dynamics
of seizure activity, an aspect of great physiological and clinical importance.",9
"abstract
this paper is concerned with the properties of gaussian random fields defined on a
riemannian homogeneous space, under the assumption that the probability distribution be invariant under the isometry group of the space. we first indicate, building
on early results of yaglom, how the available information on group-representationtheory-related special functions makes it possible to give completely explicit descriptions of these fields in many cases of interest. we then turn to the expected size of the
zero-set: extending two-dimensional results from optics and neuroscience, we show
that every invariant field comes with a natural unit of volume (defined in terms of the
geometrical redundancies in the field) with respect to which the average size of the
zero-set depends only on the dimension of the source and target spaces, and not on
the precise symmetry exhibited by the field. both the volume unit and the associated
density of zeroes can in principle be evaluated from a single sample of the field, and
our result provides a numerical signature for the fact that a given individual map be
a sample from an invariant gaussian field.",4
"abstract
we consider a phase retrieval problem, where we want to reconstruct
an n-dimensional vector from its phaseless scalar products with m sensing
vectors, independently sampled from complex normal distributions. we
show that, with a suitable initialization procedure, the classical algorithm
of alternating projections succeeds with high probability when m ≥ cn,
for some c > 0.
we conjecture that this result is still true when no special
initialization procedure is used, and present numerical experiments that
support this conjecture.",10
"abstract:
antitubercular activity of sulfathiazole derivatives series were subjected
to quantitative structure activity
relationship (qsar) analysis with an attempt to derive and understand a correlation between the biological
activity as dependent variable and various descriptors as independent variables. qsar models generated using 28
compounds. several statistical regression expressions were obtained using partial least squares (pls) regression,
multiple linear regression (mlr) and principal component regression (pcr) methods. among these
methods, the partial least square regression (pls) method has shown a very promising result as compared to the other two
methods. a qsar model was generated by a training set of 18 molecules with correlation coefficient r ( ) of
0.9191 , significant cross validated correlation coefficient ( ) of 0.8300 , f test of 53.5783 ,
for external test set
(
-3.6132, coefficient of correlation of predicted data set
partial least squares regression method.",5
"abstract
quantum computing is a promising approach of computation that is based on equations from quantum mechanics. a
simulator for quantum algorithms must be capable of performing heavy mathematical matrix transforms. the design of
the simulator itself takes one of three forms: quantum turing machine, network model or circuit model of connected
gates, or quantum programming language, yet, some simulators are hybrid.
we studied previous simulators and then we adopt features from three simulators of different implementation
languages, different paradigms, and for different platforms. they are quantum computing language (qcl), quasi,
and quantum optics toolbox for matlab 5.
our simulator for quantum algorithms takes the form of a package or a +programming library for quantum computing, with a case study showing the ability of using it in the circuit model. +the .net is a promising platform for computing. vb.net is an easy, high productive programming language with +the full power and functionality provided by the .net framework. it is highly readable, writeable, and flexible +language, compared to another language such as c#.net in many aspects. we adopted vb.net although its shortage +in built-in mathematical complex and matrix operations, compared to matlab. +for implementation, we first built a mathematical core of matrix operations. then, we built a quantum core which +contains: basic qubits and register operations, basic 1d, 2d, and 3d quantum gates, and multi-view visualization of the +quantum state, then a window for demos to show you how to use and get the most of the package. +keywords: quantum computing, quantum simulator, quantum programming language, q# , a quantum computation +package , .net platform, turing machine, quantum circuit model, quantum gates.",6 +"abstract. we adapt the construction of the grothendieck group associated +to a commutative monoı̈d to handle idempotent monoı̈ds. our construction works for a restricted class of commutative monoı̈ds, it agrees with the +grothendieck group construction in many cases and yields a hypergroup which +solves the universal problem for morphisms to hypergroups. it gives the expected non-trivial hypergroup construction in the case of idempotent monoı̈ds.",0 +"abstract. this article deals with problems related to efficient sensor placement in linear time-invariant discrete-time systems with partial state observations. the output matrix is assumed to be constrained in the sense that +the set of states that each output can measure are pre-specified. 
two problems are addressed assuming purely structural conditions at the level of only +the interconnections between the system being known. 1) we establish that +identifying the minimal number of sensors required to ensure a desired structural observability index is np-complete. 2) we propose an efficient greedy +strategy for selecting a fixed number of sensors from the given set of sensors in +order to maximize the number of states structurally observable in the system. +we identify a large class of systems for which both the problems are solvable +in polynomial time using simple greedy algorithms to provide best approximate solutions. an illustration of the techniques developed here is given on +the benchmark ieee 118-bus power network, which has ∼ 400 states in its +linearized model.",3 +"abstract +biometric authentication is important for a large +range of systems, including but not limited to consumer electronic devices such as phones. understanding the limits of and attacks on such systems +is therefore crucial. this paper presents an attack on fingerprint recognition system using masterprints, synthetic fingerprints that are capable +of spoofing multiple people’s fingerprints. the +method described is the first to generate complete +image-level masterprints, and further exceeds the +attack accuracy of previous methods that could +not produce complete images. the method, latent variable evolution, is based on training a +generative adversarial network on a set of real +fingerprint images. stochastic search in the form +of the covariance matrix adaptation evolution +strategy is then used to search for latent variable +(inputs) to the generator network that optimize +the number of matches from a fingerprint recognizer. we find masterprints that a commercial +fingerprint system matches to 23% of all users in +a strict security setting, and 77% of all users at +a looser security setting. 
the underlying method +is likely to have broad usefulness for security research as well as in aesthetic domains.",1 +"abstract +we present new capacity upper bounds for the discrete-time poisson channel with no dark current +and an average-power constraint. these bounds are a simple consequence of techniques developed by one +of the authors for the seemingly unrelated problem of upper bounding the capacity of binary deletion and +repetition channels. previously, the best known capacity upper bound in the regime where the averagepower constraint does not approach zero was due to martinez (josa b, 2007), which we re-derive as a +special case of our framework. furthermore, we instantiate our framework to obtain a closed-form bound +that noticeably improves the result of martinez everywhere.",7 +"abstract +we study the problem of 2-dimensional orthogonal range counting with additive error. given +a set p of n points drawn from an n × n grid and an error parameter ε, the goal is to build +a data structure, such that for any orthogonal range r, it can return the number of points in +p ∩ r with additive error εn. a well-known solution for this problem is the ε-approximation, +which is a subset a ⊆ p that can estimate the number of points in p ∩ r with the number of +points in a ∩ r. it is known that an ε-approximation of size o( 1ε log2.5 1ε ) exists for any p with +respect to orthogonal ranges, and the best lower bound is ω( 1ε log 1ε ). +the ε-approximation is a rather restricted data structure, as we are not allowed to store any +information other than the coordinates of the points in p . in this paper, we explore what can be +achieved without any restriction on the data structure. we first describe a simple data structure +that uses o( 1ε (log2 1ε + log n)) bits and answers queries with error εn. we then prove a lower +bound that any data structure that answers queries with error εn must use ω( 1ε (log2 1ε + log n)) +bits. 
our lower bound is information-theoretic: we show that there is a collection of 2ω(n log n) +point sets with large union combinatorial discrepancy, and thus are hard to distinguish unless +we use ω(n log n) bits.",8 +"abstract +we present gradual type theory, a logic and type theory for call-by-name gradual typing. we define the +central constructions of gradual typing (the dynamic type, type casts and type error) in a novel way, by +universal properties relative to new judgments for gradual type and term dynamism, which were developed +in blame calculi and to state the “gradual guarantee” theorem of gradual typing. combined with the +ordinary extensionality (η) principles that type theory provides, we show that most of the standard +operational behavior of casts is uniquely determined by the gradual guarantee. this provides a semantic +justification for the definitions of casts, and shows that non-standard definitions of casts must violate +these principles. our type theory is the internal language of a certain class of preorder categories called +equipments. we give a general construction of an equipment interpreting gradual type theory from a +2-category representing non-gradual types and programs, which is a semantic analogue of findler and +felleisen’s definitions of contracts, and use it to build some concrete domain-theoretic models of gradual +typing.",6 +"abstract +deep neural network (dnn) acoustic models have yielded +many state-of-the-art results in automatic speech recognition +(asr) tasks. more recently, recurrent neural network (rnn) +models have been shown to outperform dnns counterparts. +however, state-of-the-art dnn and rnn models tend to be impractical to deploy on embedded systems with limited computational capacity. traditionally, the approach for embedded platforms is to either train a small dnn directly, or to train a small +dnn that learns the output distribution of a large dnn. 
in this +paper, we utilize a state-of-the-art rnn to transfer knowledge +to small dnn. we use the rnn model to generate soft alignments and minimize the kullback-leibler divergence against +the small dnn. the small dnn trained on the soft rnn alignments achieved a 3.93 wer on the wall street journal (wsj) +eval92 task compared to a baseline 4.54 wer or more than 13% +relative improvement. +index terms: deep neural networks, recurrent neural networks, automatic speech recognition, model compression, +embedded platforms",9 +"abstract +small objects detection is a challenging task in computer vision due to its limited resolution and information. in order +to solve this problem, the majority of existing methods sacrifice speed for improvement in accuracy. in this paper, we +aim to detect small objects at a fast speed, using the best object detector single shot multibox detector (ssd) with +respect to accuracy-vs-speed trade-off as base architecture. we propose a multi-level feature fusion method for +introducing contextual information in ssd, in order to improve the accuracy for small objects. in detailed fusion +operation, we design two feature fusion modules, concatenation module and element-sum module, different in the way of +adding contextual information. experimental results show that these two fusion modules obtain higher map on pascal +voc2007 than baseline ssd by 1.6 and 1.7 points respectively, especially with 2-3 points improvement on some small +objects categories. the testing speed of them is 43 and 40 fps respectively, superior to the state of the art +deconvolutional single shot detector (dssd) by 29.4 and 26.4 fps. +keywords: small object detection, feature fusion, real-time, single shot multi-box detector.",7 +"abstract +non-orthogonal multiple access (noma) has attracted much recent attention owing to its capability +for improving the system spectral efficiency in wireless communications. 
deploying noma in heterogeneous networks can satisfy users’ explosive data traffic requirements, and noma will likely play an
important role in the fifth-generation (5g) mobile communication networks. however, noma brings
new technical challenges on resource allocation due to the mutual cross-tier interference in heterogeneous
networks. in this article, to study the tradeoff between data rate performance and energy consumption
in noma, we examine the problem of energy-efficient user scheduling and power optimization in 5g
noma heterogeneous networks. the energy-efficient user scheduling and power allocation schemes
are introduced for the downlink 5g noma heterogeneous network for perfect and imperfect channel
state information (csi) respectively. simulation results show that the resource allocation schemes can
significantly increase the energy efficiency of the 5g noma heterogeneous network for both cases of
perfect csi and imperfect csi.",7
"abstract. chest x-ray is the most common medical imaging exam used
to assess multiple pathologies. automated algorithms and tools have the
potential to support the reading workflow, improve efficiency, and reduce reading errors. with the availability of large scale data sets, several methods have been proposed to classify pathologies on chest x-ray
images. however, most methods report performance based on random
image based splitting, ignoring the high probability of the same patient
appearing in both training and test set. in addition, most methods fail to
explicitly incorporate the spatial information of abnormalities or utilize
the high resolution images. we propose a novel approach based on location aware dense networks (dnetloc), whereby we incorporate both
high-resolution image data and spatial information for abnormality classification. we evaluate our method on the largest data set reported in the
community, containing a total of 86,876 patients and 297,541 chest x-ray
images.
we achieve (i) the best average auc score for published training
and test splits on the single benchmarking data set (chestx-ray14 [1]),
and (ii) improved auc scores when the pathology location information
is explicitly used. to foster future research we demonstrate the limitations of the current benchmarking setup [1] and provide new reference
patient-wise splits for the used data sets. this could support consistent
and meaningful benchmarking of future methods on the largest publicly
available data sets.",2
"abstract. we investigate the action of outer automorphisms of finite groups of lie
type on their irreducible characters. we obtain a definite result for cuspidal characters.
as an application we verify the inductive mckay condition for some further infinite
families of simple groups at certain primes.",4
"abstract
we propose an ecg denoising method based on a feed forward neural
network with three hidden layers. particularly useful for very noisy signals,
this approach uses the available ecg channels to reconstruct a noisy
channel. we tested the method on all the records from the physionet mit-bih arrhythmia database, adding electrode motion artifact noise. this
denoising method improved the performance of publicly available ecg
analysis programs on noisy ecg signals. this is an offline method that
can be used to remove noise from very corrupted holter records.",5
"abstract—network transfer and disk read are the most time
consuming operations in the repair process for node failures in
erasure-code-based distributed storage systems. recent developments on reed-solomon codes, the most widely used erasure
codes in practical storage systems, have shown that efficient repair
schemes specifically tailored to these codes can significantly reduce
the network bandwidth spent to recover single failures. however,
the i/o cost, that is, the number of disk reads performed in these
repair schemes remains largely unknown.
we take the first step to +address this gap in the literature by investigating the i/o costs of +some existing repair schemes for full-length reed-solomon codes.",7 +"abstract. subset sum and k-sat are two of the most extensively studied problems in computer +science, and conjectures about their hardness are among the cornerstones of fine-grained complexity. +one of the most intriguing open problems in this area is to base the hardness of one of these problems +on the other. +our main result is a tight reduction from k-sat to subset sum on dense instances, proving that +bellman’s 1962 pseudo-polynomial o∗ (t )-time algorithm for subset-sum on n numbers and target t +cannot be improved to time t 1−ε · 2o(n) for any ε > 0, unless the strong exponential time hypothesis +(seth) fails. this is one of the strongest known connections between any two of the core problems of +fine-grained complexity. +as a corollary, we prove a “direct-or” theorem for subset sum under seth, offering a new tool +for proving conditional lower bounds: it is now possible to assume that deciding whether one out of n +given instances of subset sum is a yes instance requires time (n t )1−o(1) . as an application of this +corollary, we prove a tight seth-based lower bound for the classical bicriteria s, t-path problem, +which is extensively studied in operations research. we separate its complexity from that of subset +sum: on graphs with m edges and edge lengths bounded by l, we show that the o(lm) pseudopolynomial time algorithm by joksch from 1966 cannot be improved to õ(l + m), in contrast to a +recent improvement for subset sum (bringmann, soda 2017).",8 +"abstract +in evolutionary biology, the speciation history of living organisms is represented graphically by a phylogeny, that is, a rooted tree whose leaves correspond to current species and +branchings indicate past speciation events. 
phylogenies are commonly estimated from molecular sequences, such as dna sequences, collected from the species of interest. at a high level, +the idea behind this inference is simple: the further apart in the tree of life are two species, +the greater is the number of mutations to have accumulated in their genomes since their most +recent common ancestor. in order to obtain accurate estimates in phylogenetic analyses, it +is standard practice to employ statistical approaches based on stochastic models of sequence +evolution on a tree. for tractability, such models necessarily make simplifying assumptions +about the evolutionary mechanisms involved. in particular, commonly omitted are insertions +and deletions of nucleotides—also known as indels. +properly accounting for indels in statistical phylogenetic analyses remains a major challenge in computational evolutionary biology. here we consider the problem of reconstructing +ancestral sequences on a known phylogeny in a model of sequence evolution incorporating nucleotide substitutions, insertions and deletions, specifically the classical tkf91 process. we +focus on the case of dense phylogenies of bounded height, which we refer to as the taxon-rich +setting, where statistical consistency is achievable. we give the first polynomial-time ancestral reconstruction algorithm with provable guarantees under constant rates of mutation. our +algorithm succeeds when the phylogeny satisfies the “big bang” condition, a necessary and +sufficient condition for statistical consistency in this context.",10 +"abstract +over the years, many different indexing techniques and search algorithms +have been proposed, including css-trees, csb+ -trees, k-ary binary search, +and fast architecture sensitive tree search. there have also been papers on +how best to set the many different parameters of these index structures, such +as the node size of csb+ -trees. 
+these indices have been proposed because cpu speeds have been increasing at a dramatically higher rate than memory speeds, giving rise to the von +neumann cpu–memory bottleneck. to hide the long latencies caused by +memory access, it has become very important to well-utilize the features of +modern cpus. in order to drive down the average number of cpu clock +cycles required to execute cpu instructions, and thus increase throughput, it +has become important to achieve a good utilization of cpu resources. some +of these are the data and instruction caches, and the translation lookaside +buffers. but it also has become important to avoid branch misprediction +penalties, and utilize vectorization provided by cpus in the form of simd +instructions. +while the layout of index structures has been heavily optimized for the +data cache of modern cpus, the instruction cache has been neglected so far. +in this paper, we present nitrogen, a framework for utilizing code generation +for speeding up index traversal in main memory database systems. by +bringing together data and code, we make index structures use the dormant +resource of the instruction cache. we show how to combine index compilation +with previous approaches, such as binary tree search, cache-sensitive tree +search, and the architecture-sensitive tree search presented by kim et al. +v",8 +"abstract +we consider a transportation system of heterogeneously connected vehicles, where not all vehicles are able to communicate. heterogeneous connectivity in transportation systems +is coupled to practical constraints such that (i) not all vehicles may be equipped with devices having communication +interfaces, (ii) some vehicles may not prefer to communicate +due to privacy and security reasons, and (iii) communication links are not perfect and packet losses and delay occur +in practice. in this context, it is crucial to develop control +algorithms by taking into account the heterogeneity. 
in this
+paper, we particularly focus on making traffic phase scheduling decisions. we develop a connectivity-aware traffic phase
+scheduling algorithm for heterogeneously connected vehicles
+that increases the intersection efficiency (in terms of the average number of vehicles that are allowed to pass the intersection) by taking into account the heterogeneity. the simulation results show that our algorithm significantly improves
+the efficiency of intersections as compared to the baselines.",3
+abstract,1
+"abstract
+in this paper, we propose a single-agent logic of
+goal-directed knowing how extending the standard
+epistemic logic of knowing that with a new knowing how operator. the semantics of the new operator is based on the idea that knowing how to
+achieve φ means that there exists a (uniform) strategy such that the agent knows that it can make sure
+φ. we give an intuitive axiomatization of our logic
+and prove the soundness, completeness and decidability of the logic. the crucial axioms relating
+knowing that and knowing how illustrate our understanding of knowing how in this setting. this
+logic can be used in representing both knowledge-that and knowledge-how.",2
+"abstract
+surveys can be viewed as programs, complete with logic,
+control flow, and bugs. word choice or the order in which
+questions are asked can unintentionally bias responses. vague,
+confusing, or intrusive questions can cause respondents to
+abandon a survey. surveys can also have runtime errors: inattentive respondents can taint results. this effect is especially
+problematic when deploying surveys in uncontrolled settings,
+such as on the web or via crowdsourcing platforms. because
+the results of surveys drive business decisions and inform scientific conclusions, it is crucial to make sure they are correct.
+we present surveyman, a system for designing, deploying, and automatically debugging surveys.
survey authors
+write their surveys in a lightweight domain-specific language
+aimed at end users. surveyman statically analyzes the survey to provide feedback to survey authors before deployment.
+it then compiles the survey into javascript and deploys it either to the web or a crowdsourcing platform. surveyman’s
+dynamic analyses automatically find survey bugs, and control
+for the quality of responses. we evaluate surveyman’s
+algorithms analytically and empirically, demonstrating its
+effectiveness with case studies of social science surveys conducted via amazon’s mechanical turk.",6
+"abstract
+in this paper, the performance of multiple-input multiple-output non-orthogonal multiple access
+(mimo-noma) is investigated when multiple users are grouped into a cluster. the superiority of
+mimo-noma over mimo orthogonal multiple access (mimo-oma) in terms of both sum channel
+capacity and ergodic sum capacity is proved analytically. furthermore, it is demonstrated that the more
+users are admitted to a cluster, the lower the achieved sum rate, which illustrates the tradeoff between
+the sum rate and the maximum number of admitted users. on this basis, a user admission scheme is proposed,
+which is optimal in terms of both sum rate and number of admitted users when the signal-to-interference-plus-noise ratio thresholds of the users are equal. when these thresholds are different, the proposed
+scheme still achieves good performance in balancing both criteria. moreover, under certain conditions,
+it maximizes the number of admitted users. in addition, the complexity of the proposed scheme is linear
+in the number of users per cluster.
simulation results verify the superiority of mimo-noma over
+mimo-oma in terms of both sum rate and user fairness, as well as the effectiveness of the proposed
+user admission scheme.",7
+"abstract
+this paper introduces and approximately solves a multi-component
+problem where small rectangular items are produced from large rectangular bins via guillotine cuts. an item is characterized by its width, height,
+due date, and earliness and tardiness penalties per unit time. each item
+induces a cost that is proportional to its earliness and tardiness. items cut
+from the same bin form a batch, whose processing and completion times
+depend on its assigned items. the items of a batch have the completion
+time of their bin. the objective is to find a cutting plan that minimizes
+the weighted sum of earliness and tardiness penalties. we address this
+problem via a constraint programming (cp) based heuristic (cph) and
+an agent-based modelling heuristic (abh). cph is an impact-based search
+strategy, implemented in the general-purpose solver ibm cp optimizer.
+abh is constructive. it builds a solution through repeated negotiations
+between the set of agents representing the items and the set representing the bins. the agents cooperate to minimize the weighted earliness-tardiness penalties. the computational investigation shows that cph
+outperforms abh on small-sized instances while the opposite prevails for
+larger instances.",8
+"abstract view that our behavioral
+types provide on osgi.
+• a first implementation of a finite automata based behavioral type system for osgi that integrates
+different tools and workflows into a framework.
+• early versions of editors and related code for supporting adaptation and checking.
+• an exemplary integration of behavioral type checkers comprising minimization, normalization
+and comparison. one checker has been implemented in plain java.
additionally we have integrated
+a checker and synthesis tool presented in [12] for deciding compatibility, deadlock freedom and
+detecting conflicts in non-deterministic specifications at runtime and development time.
+• usage scenarios (interaction protocols) of our behavioral types for osgi at runtime and development time.
+• the modeling of an example system: a booking system to show different usage scenarios.
+b. buhnova, l. happe, j. kofroň:
+formal engineering approaches to software components
+and architectures 2013 (fesca’13)
+eptcs 108, 2013, pp. 79–93, doi:10.4204/eptcs.108.6",6
+"abstract. a principled approach to the design of program verification and construction tools is applied to separation logic. the control flow is modelled by
+power series with convolution as separating conjunction. a generic construction
+lifts resource monoids to assertion and predicate transformer quantales. the data
+flow is captured by concrete store/heap models. these are linked to the separation
+algebra by soundness proofs. verification conditions and transformation laws are
+derived by equational reasoning within the predicate transformer quantale. this
+separation of concerns makes an implementation in the isabelle/hol proof assistant simple and highly automatic. the resulting tool is correct by construction;
+it is explained on the classical linked list reversal example.",6
+"abstract
+knowledge representation is a long-standing and important topic
+in ai. a variety of models have been proposed for knowledge graph embedding, which projects symbolic entities and relations into continuous vector space. however,
+most related methods merely focus on the data fitting of the knowledge graph, and ignore the interpretable semantic expression. thus, traditional
+embedding methods are not well suited to applications that require semantic analysis, such as
+question answering and entity retrieval.
to this
+end, this paper proposes a semantic representation
+method for knowledge graphs (ksr), which imposes a two-level hierarchical generative process
+that globally extracts many aspects and then locally assigns a specific category in each aspect for
+every triple. since both aspects and categories are
+semantically relevant, the collection of categories in
+each aspect is treated as the semantic representation of this triple. extensive experiments show that
+our model outperforms other state-of-the-art baselines substantially.",2
+"abstract—face aging has attracted considerable attention and
+interest from the computer vision community in recent years.
+numerous approaches ranging from purely image processing
+techniques to deep learning structures have been proposed in
+the literature. in this paper, we aim to give a review of recent
+developments of modern deep learning based approaches, i.e.
+deep generative models, for the face aging task. their structures,
+formulation, learning algorithms as well as synthesized results
+are also provided with systematic discussions. moreover, the
+aging databases used in most methods to learn the aging process
+are also reviewed.
+keywords-face aging, face age progression, deep generative models.",1
+"abstract
+policy optimization methods have shown great promise in
+solving complex reinforcement and imitation learning tasks.
+while model-free methods are broadly applicable, they often
+require many samples to optimize complex policies. model-based methods greatly improve sample-efficiency but at the
+cost of poor generalization, requiring a carefully handcrafted
+model of the system dynamics for each task. recently, hybrid methods have been successful in trading off applicability
+for improved sample-complexity. however, these have been
+limited to continuous action spaces. in this work, we present
+a new hybrid method based on an approximation of the dynamics as an expectation over the next state under the current policy.
this relaxation allows us to derive a novel hybrid policy gradient estimator, combining score function and
+pathwise derivative estimators, that is applicable to discrete
+action spaces. we show significant gains in sample complexity, ranging between 1.7 and 25×, when learning parameterized policies on cart pole, acrobot, mountain car and hand
+mass. our method is applicable to both discrete and continuous action spaces, whereas competing pathwise methods are
+limited to the latter.",2
+"abstract
+motivated by the increasing need to understand the algorithmic foundations of distributed
+large-scale graph computations, we study a number of fundamental graph problems in a message-passing model for distributed computing where k ≥ 2 machines jointly perform computations on
+graphs with n nodes (typically, n ≫ k). the input graph is assumed to be initially randomly
+partitioned among the k machines, a common implementation in many real-world systems.
+communication is point-to-point, and the goal is to minimize the number of communication
+rounds of the computation.
+our main result is an (almost) optimal distributed randomized algorithm for graph connectivity. our algorithm runs in õ(n/k^2) rounds (õ notation hides a polylog(n) factor and
+an additive polylog(n) term). this improves over the best previously known bound of õ(n/k)
+[klauck et al., soda 2015], and is optimal (up to a polylogarithmic factor) in view of an existing
+lower bound of ω̃(n/k^2). our improved algorithm uses several techniques, including linear
+graph sketching, that prove useful in the design of efficient distributed graph algorithms. using
+the connectivity algorithm as a building block, we then present fast randomized algorithms for
+computing minimum spanning trees, (approximate) min-cuts, and for many graph verification
+problems. all these algorithms take õ(n/k^2) rounds, and are optimal up to polylogarithmic
+factors.
we also show an almost matching lower bound of ω̃(n/k^2) rounds for many graph
+verification problems by leveraging lower bounds in random-partition communication complexity.",8
+"abstract
+we propose a framework that learns a representation transferable across different
+domains and tasks in a label-efficient manner. our approach battles domain shift
+with a domain adversarial loss, and generalizes the embedding to a novel task using
+a metric learning-based approach. our model is simultaneously optimized on
+labeled source data and unlabeled or sparsely labeled data in the target domain.
+our method shows compelling results on novel classes within a new domain even
+when only a few labeled examples per class are available, outperforming the
+prevalent fine-tuning approach. in addition, we demonstrate the effectiveness of
+our framework on the transfer learning task from image object recognition to video
+action recognition.",1
+abstract,7
+"abstract
+we start with a simple introduction to topological data analysis, where the most
+popular tool is called a persistence diagram. briefly, a persistence diagram is a multiset of points in the plane describing the persistence of topological features of a
+compact set when a scale parameter varies. since statistical methods are difficult
+to apply directly on persistence diagrams, various alternative functional summary
+statistics have been suggested, but either they do not contain the full information of
+the persistence diagram or they are two-dimensional functions. we suggest a new
+functional summary statistic that is one-dimensional and hence easier to handle,
+and which under mild conditions contains the full information of the persistence diagram. its usefulness is illustrated in statistical settings concerned with point clouds
+and brain artery trees. the appendix includes additional methods and examples,
+together with technical details.
the r-code used for all examples is available at
+http://people.math.aau.dk/~christophe/rcode.zip.",10
+"abstract. in this work, we present a deep learning framework for multiclass breast cancer image classification as our submission to the international conference on image analysis and recognition (iciar) 2018
+grand challenge on breast cancer histology images (bach). as these
+histology images are too large to fit into gpu memory, we first propose
+using inception v3 to perform patch-level classification. the patch-level
+predictions are then passed through an ensemble fusion framework involving majority voting, gradient boosting machine (gbm), and logistic
+regression to obtain the image-level prediction. we improve the sensitivity of the normal and benign predicted classes by designing a dual path
+network (dpn) to be used as a feature extractor where these extracted
+features are further sent to a second layer of ensemble prediction fusion
+using gbm, logistic regression, and support vector machine (svm) to refine predictions. experimental results demonstrate our framework shows
+a 12.5% improvement over the state-of-the-art model.",1
+"abstract
+segmentation of histopathology sections is a ubiquitous requirement in digital
+pathology and, due to the large variability of biological tissue, machine learning
+techniques have shown superior performance over standard image processing methods. as part of the glas@miccai2015 colon gland segmentation challenge, we
+present a learning-based algorithm to segment glands in tissue of benign and malignant colorectal cancer. images are preprocessed according to the hematoxylin-eosin staining protocol and two deep convolutional neural networks (cnn) are
+trained as pixel classifiers. the cnn predictions are then regularized using a
+figure-ground segmentation based on weighted total variation to produce the final
+segmentation result.
on two test sets, our approach achieves a tissue classification
+accuracy of 98% and 94%, making use of the inherent capability of our system to
+distinguish between benign and malignant tissue.",1
+"abstract— in this paper, a progressive learning technique for multi-class classification is proposed. this newly developed
+learning technique is independent of the number of class constraints and it can learn new classes while still retaining the
+knowledge of previous classes. whenever a new class (non-native to the knowledge learnt thus far) is encountered, the
+neural network structure gets remodeled automatically by facilitating new neurons and interconnections, and the
+parameters are calculated in such a way that it retains the knowledge learnt thus far. this technique is suitable for real-world applications where the number of classes is often unknown and online learning from real-time data is required. the
+consistency and the complexity of the progressive learning technique are analyzed. several standard datasets are used to
+evaluate the performance of the developed technique. a comparative study shows that the developed technique is superior.",9
+"abstract
+in our previous papers we have described efficient and reliable methods of generation of representative volume elements (rve) perfectly suitable for analysis of composite materials via stochastic
+homogenization.
+in this paper we profit from these methods to analyze the influence of the morphology on the
+effective mechanical properties of the samples. more precisely, we study the dependence of main
+mechanical characteristics of a composite medium on various parameters of the mixture of inclusions
+composed of spheres and cylinders. on top of that we introduce various imperfections to inclusions
+and observe the evolution of effective properties related to that.
+
the main computational approach used throughout the work is the fft-based homogenization
+technique, validated, however, by comparison with the direct finite element method. we give details
+on the features of the method and the validation campaign as well.
+keywords: composite materials, cylindrical and spherical reinforcements, mechanical properties, stochastic
+homogenization",5
+"abstract. in this article, we will prove a full topological version of popa’s measurable cocycle superrigidity theorem for full shifts [36]. more precisely, we prove
+that every hölder continuous cocycle for the full shifts of every finitely generated
+group g that has one end, undistorted elements and sub-exponential divergence
+function is cohomologous to a group homomorphism via a hölder continuous transfer map if the target group is complete and admits a compatible bi-invariant metric. using the ideas of behrstock, druţu, mosher, mozes and sapir [4, 5, 17, 18],
+we show that the class of our acting groups is large, including wide groups having undistorted elements and one-ended groups with strong thick of finite orders.
+as a consequence, irreducible uniform lattices of most higher rank connected
+semisimple lie groups; mapping class groups of genus-g surfaces with p punctures,
+g ≥ 2, p ≥ 0; richard thompson groups f, t, v; aut(f_n), out(f_n), n ≥ 3; certain
+(2-dimensional) coxeter groups; and one-ended right-angled artin groups are in
+our class. this partially extends the main result in [12].",4
+"abstract
+this technical note addresses the distributed fixed-time consensus protocol design problem for
+multi-agent systems with general linear dynamics over directed communication graphs. by using motion
+planning approaches, a class of distributed fixed-time consensus algorithms is developed, which relies
+only on sampling information at some sampling instants.
for linear multi-agent systems, the proposed
+algorithms solve the fixed-time consensus problem for any directed graph containing a directed spanning
+tree. in particular, the settling time can be pre-assigned off-line according to task requirements. compared
+with the existing results for multi-agent systems, to the best of our knowledge, this is the first time that fixed-time consensus problems have been solved for general linear multi-agent systems over directed graphs having a directed
+spanning tree. extensions to fixed-time formation flying are further studied for multiple satellites
+described by hill equations.
+index terms
+fixed-time consensus, linear multi-agent system, directed graph, pre-specified settling time, directed
+spanning tree.",3
+"abstract
+we study the problem of estimating multivariate log-concave probability density
+functions. we prove the first sample complexity upper bound for learning log-concave
+densities on r^d, for all d ≥ 1. prior to our work, no upper bound on the sample
+complexity of this learning problem was known for the case of d > 3.
+in more detail,
+we give an estimator that, for any d ≥ 1 and ǫ > 0, draws
+õ_d((1/ǫ)^((d+5)/2)) samples from an unknown target log-concave density on r^d, and
+outputs a hypothesis that (with high probability) is ǫ-close to the target, in total variation distance. our upper bound on the sample complexity comes close to the known
+lower bound of ω_d((1/ǫ)^((d+1)/2)) for this problem.",7
+"abstract
+we design fast dynamic algorithms for proper vertex and edge colorings in a graph undergoing edge
+insertions and deletions. in the static setting, there are simple linear time algorithms for (∆ + 1)-vertex
+coloring and (2∆ − 1)-edge coloring in a graph with maximum degree ∆. it is natural to ask if we can
+efficiently maintain such colorings in the dynamic setting as well. we get the following three results.
(1)
+we present a randomized algorithm which maintains a (∆ + 1)-vertex coloring with o(log ∆) expected
+amortized update time. (2) we present a deterministic algorithm which maintains a (1 + o(1))∆-vertex
+coloring with o(polylog ∆) amortized update time. (3) we present a simple, deterministic algorithm
+which maintains a (2∆ − 1)-edge coloring with o(log ∆) worst-case update time. this improves the
+recent o(∆)-edge coloring algorithm with õ(√∆) worst-case update time [bm17].",8
+"abstracting the details of parallel implementation from the developer. most existing libraries provide
+implementations of skeletons that are defined over flat data types such as lists or arrays. however,
+skeleton-based parallel programming is still very challenging as it requires intricate analysis of the
+underlying algorithm and often uses inefficient intermediate data structures. further, the algorithmic
+structure of a given program may not match those of list-based skeletons. in this paper, we present
+a method to automatically transform any given program to one that is defined over a list and is more
+likely to contain instances of list-based skeletons. this facilitates the parallel execution of a transformed program using existing implementations of list-based parallel skeletons. further, by using an
+existing transformation called distillation in conjunction with our method, we produce transformed
+programs that contain fewer inefficient intermediate data structures.",6
+"abstract
+in recent years, there has been tremendous progress in automated
+synthesis techniques that are able to automatically generate code
+based on some intent expressed by the programmer. a major challenge for the adoption of synthesis remains in having the programmer communicate their intent.
when the expressed intent is coarse-grained (for example, a restriction on the expected type of an expression), the synthesizer often produces a long list of results for the
+programmer to choose from, shifting the heavy lifting to the user.
+an alternative approach, successfully used in end-user synthesis, is
+programming by example (pbe), where the user leverages examples
+to interactively and iteratively refine the intent. however, using only
+examples is not expressive enough for programmers, who can observe the generated program and refine the intent by directly relating
+to parts of the generated program.
+we present a novel approach to interacting with a synthesizer
+using a granular interaction model. our approach employs a rich
+interaction model where (i) the synthesizer decorates a candidate
+program with debug information that assists in understanding the
+program and identifying good or bad parts, and (ii) the user is
+allowed to provide feedback not only on the expected output of a
+program, but also on the underlying program itself. that is, when the
+user identifies a program as (partially) correct or incorrect, they can
+also explicitly indicate the good or bad parts, to allow the synthesizer
+to accept or discard parts of the program instead of discarding the
+program as a whole.
+we show the value of our approach in a controlled user study.
+our study shows that participants have a strong preference for using
+granular feedback instead of examples, and are able to provide
+granular feedback much faster.",6
+"abstract
+we study bisimulation and context equivalence in a probabilistic λ-calculus. the contributions of this paper are threefold. firstly, we show a technique for proving congruence
+of probabilistic applicative bisimilarity. while the technique follows howe’s method, some
+of the technicalities are quite different, relying on non-trivial “disentangling” properties for
+sets of real numbers.
secondly, we show that, while bisimilarity is in general strictly finer
+than context equivalence, coincidence between the two relations is attained on pure λ-terms.
+the resulting equality is that induced by levy-longo trees, generally accepted as the finest
+extensional equivalence on pure λ-terms under a lazy regime. finally, we derive a coinductive
+characterisation of context equivalence on the whole probabilistic language, via an extension
+in which terms akin to distributions may appear in redex position. another motivation for the
+extension is that its operational semantics allows us to experiment with a different congruence
+technique, namely that of logical bisimilarity.",6
+"abstract
+in this paper, we conduct an empirical study on discovering
+the ordered collective dynamics exhibited by a population of
+artificial intelligence (ai) agents. our intention is to put ai
+agents into a simulated natural context, and then to understand their induced dynamics at the population level. in particular, we aim to verify if the principles developed in the
+real world could also be used in understanding an artificially-created intelligent population. to achieve this, we simulate a
+large-scale predator-prey world, where the laws of the world
+are designed by only the findings or logical equivalence that
+have been discovered in nature. we endow the agents with
+intelligence based on deep reinforcement learning, and
+scale the population size up to millions. our results show that
+the population dynamics of ai agents, driven only by each
+agent’s individual self-interest, reveals an ordered pattern that
+is similar to the lotka-volterra model studied in population
+biology. we further discover the emergent behaviors of collective adaptations in studying how the agents’ grouping behaviors will change with the environmental resources.
both findings can be explained by self-organization
+theory in nature.",2
+"abstract
+an element g of a finite group g is said to be vanishing in g if there exists an
+irreducible character χ of g such that χ(g) = 0; in this case, g is also called a zero
+of g. the aim of this paper is to obtain structural properties of a factorised group
+g = ab when we impose some conditions on prime power order elements g ∈ a ∪ b
+which are (non-)vanishing in g.
+keywords finite groups · products of groups · irreducible characters · conjugacy
+classes · vanishing elements
+2010 msc 20d40 · 20c15 · 20e45",4
+"abstract. it is shown that the algebra h∞ of bounded dirichlet series
+is not a coherent ring, and has infinite bass stable rank. as corollaries
+of the latter result, it is derived that h∞ has infinite topological stable
+rank and infinite krull dimension.",0
+"abstract
+we show how to efficiently obtain the algebraic normal form of boolean
+functions vanishing on hamming spheres centred at zero. by exploiting the
+symmetry of the problem we obtain formulas for particular cases, and a
+computational method to address the general case. a list of all the polynomials corresponding to spheres of radius up to 64 is provided. moreover,
+we explicitly provide a connection to the binary möbius transform of the
+elementary symmetric functions. we conclude by presenting a method based
+on polynomial evaluation to compute the minimum distance of binary linear
+codes.
+keywords: binary polynomials, binary möbius transform, elementary
+symmetric functions, minimum distance, linear codes
+1. introduction
+many computationally hard problems can be described by boolean polynomial systems, and the standard approach is the computation of the gröbner
+basis of the corresponding ideal. since this is a quite common scenario, we will
+restrict ourselves to ideals of f_2[x_1, . . . , x_n] containing the entire set of field
+equations {x_i^2 + x_i}_i.
to ease the notation, our work environment will therefore be the quotient ring r = f_2[x_1, . . . , x_n]/(x_1^2 + x_1, . . . , x_n^2 + x_n). moreover,
+most of our results do not depend on the number n of variables, and when
+not otherwise specified we consider r to be defined in infinitely many variables. we denote by x the set of our variables.
+in this work we characterise the vanishing ideal i_t of the set of binary vectors
+contained in the hamming sphere of radius t − 1. this characterisation corresponds to the explicit construction of the square-free polynomial φ_t whose
+roots are exactly the set of points of weight at most t − 1. it is worth mentioning that this polynomial corresponds to the algebraic normal form (anf) of
+preprint submitted to elsevier",0
+"abstract: existing mathematical theory interprets the concept of standard deviation as a
+dispersion degree. therefore, in measurement theory, both the uncertainty concept and the precision
+concept, which are expressed as a standard deviation or a multiple of the standard deviation, are also defined
+as the dispersion of the measurement result, so that the concept logic is tangled. through comparative
+analysis of the standard deviation concept and re-interpretation of the measurement error evaluation
+principle, this paper points out that the concept of standard deviation is actually the probability
+interval value of a single error instead of a dispersion degree, and that error with any regularity can be
+evaluated by standard deviation; it corrects this mathematical concept and gives the
+direction for correcting the concept logic of measurement. these results will bring a global change to the measurement theory
+system.
+keywords: measurement error; standard deviation; variance; covariance; probability theory",10
+"abstract
+in this paper, we propose a very concise deep learning approach for
+collaborative filtering that jointly models distributional representation for users and
+items.
the proposed framework achieves better performance than
+current state-of-the-art algorithms, which makes the distributional representation
+model a promising direction for further research in collaborative filtering.",9
+"abstract—the virtual network embedding problem (vnep)
+captures the essence of many resource allocation problems of
+today’s cloud providers, which offer their physical computation and networking resources to customers. customers request
+resources in the form of virtual networks, i.e. as a directed
+graph, specifying computational requirements at the nodes and
+bandwidth requirements on the edges. an embedding of a
+virtual network on the shared physical infrastructure is the joint
+mapping of (virtual) nodes to suitable physical servers together
+with the mapping of (virtual) edges onto paths in the physical
+network connecting the respective servers. we study the offline
+setting of the vnep in which multiple requests are given and
+the task is to find the most profitable set of requests to embed
+while not exceeding the physical resource capacities.
+this paper initiates the study of approximation algorithms
+for the vnep by employing randomized rounding of linear
+programming solutions. we show that the standard linear
+programming formulation exhibits an inherent structural deficit,
+yielding large (or even infinite) integrality gaps. in turn, focusing
+on the class of cactus graphs for virtual networks, we devise
+a novel linear programming formulation together with an
+algorithm to decompose fractional solutions into convex combinations of valid embeddings. applying randomized rounding,
+we obtain the first tri-criteria approximation algorithm in the
+classic resource augmentation model.",8
+"abstract. let a → b be a homomorphism of commutative rings. the squaring operation is a functor sq_{b/a} from the derived category d(b) of complexes
+of b-modules into itself.
this operation is needed for the definition of rigid +complexes (in the sense of van den bergh), that in turn leads to a new approach +to grothendieck duality for rings, schemes and even dm stacks. +in our paper with j.j. zhang from 2008 we introduced the squaring operation, and explored some of its properties. unfortunately some of the proofs +in that paper had severe gaps in them. +in the present paper we reproduce the construction of the squaring operation. this is done in a more general context than in the first paper: here we +consider a homomorphism a → b of commutative dg rings. our first main +result is that the square sqb/a (m ) of a dg b-module m is independent of +the resolutions used to present it. our second main result is on the trace functoriality of the squaring operation. we give precise statements and complete +correct proofs. +in a subsequent paper we will reproduce the remaining parts of the 2008 +paper that require fixing. this will allow us to proceed with the other papers, +mentioned in the bibliography, on the rigid approach to grothendieck duality. +the proofs of the main results require a substantial amount of foundational +work on commutative and noncommutative dg rings, including a study of +semi-free dg rings, their lifting properties, and their homotopies. this part +of the paper could be of independent interest.",0 +"abstract +the area of computation called artificial intelligence (ai) is falsified by describing a previous 1972 +falsification of ai by british applied mathematician james lighthill. it is explained how lighthill’s +arguments continue to apply to current ai. it is argued that ai should use the popperian scientific method +in which it is the duty of every scientist to attempt to falsify theories and if theories are falsified to replace +or modify them. 
the paper describes the popperian method in detail and discusses paul nurse’s application
of the method to cell biology that also involves questions of mechanism and behavior. arguments used by
lighthill in his original 1972 report that falsified ai are discussed. the lighthill arguments are then shown
to apply to current ai. the argument uses recent scholarship to explain lighthill’s assumptions and to
show how the arguments based on those assumptions continue to falsify modern ai. an important focus
of the argument involves hilbert’s philosophical programme that defined knowledge and truth as provable
formal sentences. current ai takes the hilbert programme as dogma beyond criticism while lighthill as a
mid 20th century applied mathematician had abandoned it. the paper uses recent scholarship to explain
john von neumann’s criticism of ai that i claim was assumed by lighthill. the paper discusses computer
chess programs to show lighthill’s combinatorial explosion still applies to ai but not humans. an
argument showing that turing machines (tm) are not the correct description of computation is given. the
paper concludes by advocating studying computation as peter naur’s dataology.",2
"abstract. we give a maximal independent set (mis) algorithm that runs in o(log log ∆) rounds
in the congested clique model, where ∆ is the maximum degree of the input graph. this improves
upon the o( log(∆)/√(log(∆)·log log n) + log log ∆ ) rounds algorithm of [ghaffari, podc ’17], where n is the
number of vertices of the input graph.
in the first stage of our algorithm, we simulate the first o(n/poly log n) iterations of the sequential
random order greedy algorithm for mis in the congested clique model in o(log log ∆) rounds. this
thins out the input graph relatively quickly: after this stage, the maximum degree of the residual
graph is poly-logarithmic.
in the second stage, we run the mis algorithm of [ghaffari, podc ’17]
on the residual graph, which completes in o(log log ∆) rounds on graphs of poly-logarithmic degree.",8
"abstract
analytical computation methods are proposed for evaluating the minimum dwell time and average dwell time guaranteeing the asymptotic
stability of a discrete-time switched linear system whose switchings are
assumed to respect a given directed graph. the minimum and average
dwell time can be found using the graph that governs the switchings, and
the associated weights. this approach, which is used in a previous work
for continuous-time systems having non-defective subsystems, has been
adapted to discrete-time switched systems and generalized to allow defective subsystems. moreover, we present a method to improve the dwell
time estimation in the case of bimodal switched systems. in this method,
scaling algorithms to minimize the condition number are used to give
better minimum dwell time and average dwell time estimates.
keywords: switched systems, minimum dwell time, average dwell
time, optimum cycle ratio, asymptotic stability, switching graph.",3
"abstract – premature convergence is one of the important
issues while using genetic programming for data modeling. it
can be avoided by improving population diversity. intelligent
genetic operators can help to improve the population diversity.
crossover is an important operator in genetic programming.
so, we have analyzed a number of intelligent crossover operators
and proposed an algorithm with a modification of the soft brood
crossover operator. it will help to improve the population
diversity and reduce the premature convergence. we have
performed experiments on three different symbolic regression
problems. we then compared the performance of our proposed
crossover (modified soft brood crossover) with the existing soft
brood crossover and subtree crossover operators.
+index terms – intelligent crossover, genetic programming, +soft brood crossover",9 +"abstract—1-bit digital-to-analog (dacs) and analog-to-digital +converters (adcs) are gaining more interest in massive mimo +systems for economical and computational efficiency. we present +a new precoding technique to mitigate the inter-user-interference +(iui) and the channel distortions in a 1-bit downlink mumiso system with qpsk symbols. the transmit signal vector is +optimized taking into account the 1-bit quantization. we develop +a sort of mapping based on a look-up table (lut) between the +input signal and the transmit signal. the lut is updated for +each channel realization. simulation results show a significant +gain in terms of the uncoded bit-error-ratio (ber) compared to +the existing linear precoding techniques.",7 +"abstract +evolutionary algorithms are well suited for solving the knapsack problem. some empirical studies claim that evolutionary +algorithms can produce good solutions to the 0-1 knapsack problem. nonetheless, few rigorous investigations address the quality +of solutions that evolutionary algorithms may produce for the knapsack problem. the current paper focuses on a theoretical +investigation of three types of (n+1) evolutionary algorithms that exploit bitwise mutation, truncation selection, plus different +repair methods for the 0-1 knapsack problem. it assesses the solution quality in terms of the approximation ratio. our work +indicates that the solution produced by pure strategy and mixed strategy evolutionary algorithms is arbitrarily bad. nevertheless, +the evolutionary algorithm using helper objectives may produce 1/2-approximation solutions to the 0-1 knapsack problem. +index terms +evolutionary algorithm, approximation algorithm, knapsack problem, solution quality",9 +"abstract +school bus planning is usually divided into routing and scheduling due to the complexity of +solving them concurrently. 
however, the separation between these two steps may lead to worse +solutions with higher overall costs than that from solving them together. when finding the +minimal number of trips in the routing problem, neglecting the importance of trip compatibility +may increase the number of buses actually needed in the scheduling problem. this paper +proposes a new formulation for the multi-school homogeneous fleet routing problem that +maximizes trip compatibility while minimizing total travel time. this incorporates the trip +compatibility for the scheduling problem in the routing problem. since the problem is inherently +just a routing problem, finding a good solution is not cumbersome. to compare the performance +of the model with traditional routing problems, we generate eight mid-size data sets. through +importing the generated trips of the routing problems into the bus scheduling (blocking) problem, +it is shown that the proposed model uses up to 13% fewer buses than the common traditional +routing models. +keywords: school bus routing, trip compatibility, school bus scheduling, bus blocking",2 +"abstract. i describe an approach to compiling common idioms in r +code directly to native machine code and illustrate it with several examples. not only can this yield significant performance gains, but it +allows us to use new approaches to computing in r. importantly, the +compilation requires no changes to r itself, but is done entirely via r +packages. this allows others to experiment with different compilation +strategies and even to define new domain-specific languages within r. +we use the low-level virtual machine (llvm ) compiler toolkit to +create the native code and perform sophisticated optimizations on the +code. by adopting this widely used software within r, we leverage +its ability to generate code for different platforms such as cpus and +gpus, and will continue to benefit from its ongoing development. 
this approach potentially allows us to develop high-level r code that is also
fast, that can be compiled to work with different data representations
and sources, and that could even be run outside of r. the approach
aims to both provide a compiler for a limited subset of the r language
and also to enable r programmers to write other compilers. this is
another approach to help us write high-level descriptions of what we
want to compute, not how.
key words and phrases: programming language, efficient computation, compilation, extensible compiler toolkit.
duncan temple lang is associate professor, department of statistics, university of california at davis, 4210 math sciences building, davis, california 95616, usa. e-mail: duncan@r-project.org.
this is an electronic reprint of the original article published by the institute of mathematical statistics in statistical science, 2014, vol. 29, no. 2, 181–200. this reprint differs from the original in pagination and typographic detail.
1. background & motivation
computing with data is in a very interesting period at present and this has significant implications
for how we choose to go forward with our computing platforms and education in statistics and related
fields. we are simultaneously (i) leveraging higher-level, interpreted languages such as r, matlab,
python and recently julia, (ii) dealing with increasing volume and complexity of data, and (iii) exploiting,
and coming to terms with, technologies for parallel computing including shared and nonshared
multi-core processors and gpus (graphics processing units). these challenge us to innovate and
significantly enhance our existing computing platforms and to develop new languages and systems so
that we are able to meet not just tomorrow’s needs, but those of the next decade.
statisticians play an important role in the “big data” surge, and therefore must pay attention to
logistical and performance details of statistical computations that we could previously ignore. we need
to think about how best to meet our own computing needs for the near future and also how to best be
able to participate in multi-disciplinary efforts that require serious computing involving statistical ideas
and methods. are we best served with our own computing platform such as r (r core team (2013))?
do we need our own system? can we afford the luxury of our own system, given the limited resources",6
"abstract
clustering analysis plays an important role in scientific research
and commercial application. k-means algorithm is a widely
used partition method in clustering. however, it is known that
the k-means algorithm may get stuck at suboptimal solutions,
depending on the choice of the initial cluster centers. in this
article, we propose a technique to handle large scale data, which
can select initial clustering center purposefully using genetic
algorithms (gas), reduce the sensitivity to isolated point, avoid
dissevering big cluster, and overcome deflexion of data in some
degree that caused by the disproportion in data partitioning
owing to adoption of multi-sampling.
we applied our method to some public datasets these show the
advantages of the proposed approach for example hepatitis c
dataset that has been taken from the machine learning
warehouse of university of california. our aim is to evaluate
hepatitis dataset. in order to evaluate this dataset we did some
preprocessing operation, the reason to preprocessing is to
summarize the data in the best and suitable way for our
algorithm. missing values of the instances are adjusted using
local mean method.",9
"abstract—the problem of multi-area interchange scheduling
in the presence of stochastic generation and load is considered.
+a new interchange scheduling technique based on a two-stage +stochastic minimization of overall expected operating cost is +proposed. because directly solving the stochastic optimization is +intractable, an equivalent problem that maximizes the expected +social welfare is formulated. the proposed technique leverages +the operator’s capability of forecasting locational marginal prices +(lmps) and obtains the optimal interchange schedule without +iterations among operators. +index terms—inter-regional interchange scheduling, multiarea economic dispatch, seams issue.",3 +"abstract. we define the parametric closure problem, in which the input is a partially ordered set whose +elements have linearly varying weights and the goal is to compute the sequence of minimum-weight +downsets of the partial order as the weights vary. we give polynomial time solutions to many important +special cases of this problem including semiorders, reachability orders of bounded-treewidth graphs, +partial orders of bounded width, and series-parallel partial orders. our result for series-parallel orders +provides a significant generalization of a previous result of carlson and eppstein on bicriterion subtree +problems.",8 +"abstract +for many compiled languages, source-level types are erased +very early in the compilation process. as a result, further compiler passes may convert type-safe source into type-unsafe +machine code. type-unsafe idioms in the original source and +type-unsafe optimizations mean that type information in a +stripped binary is essentially nonexistent. the problem of recovering high-level types by performing type inference over +stripped machine code is called type reconstruction, and offers a useful capability in support of reverse engineering and +decompilation. +in this paper, we motivate and develop a novel type system and algorithm for machine-code type inference. 
the +features of this type system were developed by surveying a +wide collection of common source- and machine-code idioms, +building a catalog of challenging cases for type reconstruction. we found that these idioms place a sophisticated set +of requirements on the type system, inducing features such +as recursively-constrained polymorphic types. many of the +features we identify are often seen only in expressive and powerful type systems used by high-level functional languages. +using these type-system features as a guideline, we have +developed retypd: a novel static type-inference algorithm for +machine code that supports recursive types, polymorphism, +and subtyping. retypd yields more accurate inferred types +than existing algorithms, while also enabling new capabilities +∗ this",6 +"abstract +we simulate the self-propulsion of devices in a fluid in the regime of low +reynolds numbers. each device consists of three bodies (spheres or capsules) connected with two damped harmonic springs. sinusoidal driving +forces compress the springs which are resolved within a rigid body physics +engine. the latter is consistently coupled to a 3d lattice boltzmann framework for the fluid dynamics. in simulations of three-sphere devices, we find +that the propulsion velocity agrees well with theoretical predictions. in simulations where some or all spheres are replaced by capsules, we find that the +asymmetry of the design strongly affects the propelling efficiency. +keywords: stokes flow, self-propelled microorganism, lattice boltzmann +method, numerical simulation +1. introduction +engineered micro-devices, developed in such a way that they are able to +move alone through a fluid and, simultaneously, emit a signal, can be of cru∗",5 +"abstract +we study the problem of testing identity against a given distribution (a.k.a. goodness-of-fit) with a +focus on the high confidence regime. 
more precisely, given samples from an unknown distribution p
over n elements, an explicitly given distribution q, and parameters 0 < ε, δ < 1, we wish to distinguish,
with probability at least 1 − δ, whether the distributions are identical versus ε-far in total variation (or
statistical) distance. existing work has focused on the constant confidence regime, i.e., the case that
δ = ω(1), for which the sample complexity of identity testing is known to be θ(√n/ε^2).
typical applications of distribution property testing require small values of the confidence parameter
δ (which correspond to small “p-values” in the statistical hypothesis testing terminology). prior work
achieved arbitrarily small values of δ via black-box amplification, which multiplies the required number
of samples by θ(log(1/δ)). we show that this upper bound is suboptimal for any δ = o(1), and give a
new identity tester that achieves the optimal sample complexity. our new upper and lower bounds show
that the optimal sample complexity of identity testing is
θ( (1/ε^2) (√(n log(1/δ)) + log(1/δ)) )
for any n, ε, and δ. for the special case of uniformity testing, where the given distribution is the uniform
distribution un over the domain, our new tester is surprisingly simple: to test whether p = un versus
dtv (p, un ) ≥ ε, we simply threshold dtv (p̂, un ), where p̂ is the empirical probability distribution. we
believe that our novel analysis techniques may be useful for other distribution testing problems as well.",8
"abstract—linear precoding has been widely studied in the context of massive multiple-input-multiple-output (mimo) together
with two common power normalization techniques, namely,
matrix normalization (mn) and vector normalization (vn).
despite this, their effect on the performance of massive mimo
systems has not been thoroughly studied yet. the aim of this
paper is to fulfill this gap by using large system analysis.
+considering a system model that accounts for channel estimation, +pilot contamination, arbitrary pathloss, and per-user channel +correlation, we compute tight approximations for the signal-tointerference-plus-noise ratio and the rate of each user equipment +in the system while employing maximum ratio transmission +(mrt), zero forcing (zf), and regularized zf precoding under +both mn and vn techniques. such approximations are used +to analytically reveal how the choice of power normalization +affects the performance of mrt and zf under uncorrelated +fading channels. it turns out that zf with vn resembles a +sum rate maximizer while it provides a notion of fairness under +mn. numerical results are used to validate the accuracy of the +asymptotic analysis and to show that in massive mimo, noncoherent interference and noise, rather than pilot contamination, +are often the major limiting factors of the considered precoding +schemes. +index terms—massive mimo, linear precoding, power normalization techniques, large system analysis, pilot contamination.",7 +abstract,5 +"abstract +mixed-integer mathematical programs are among the most commonly used models for a wide set of +problems in operations research and related fields. however, there is still very little known about what +can be expressed by small mixed-integer programs. in particular, prior to this work, it was open whether +some classical problems, like the minimum odd-cut problem, can be expressed by a compact mixedinteger program with few (even constantly many) integer variables. this is in stark contrast to linear +formulations, where recent breakthroughs in the field of extended formulations have shown that many +polytopes associated to classical combinatorial optimization problems do not even admit approximate +extended formulations of sub-exponential size. 
+we provide a general framework for lifting inapproximability results of extended formulations to the +setting of mixed-integer extended formulations, and obtain almost tight lower bounds on the number of +integer variables needed to describe a variety of classical combinatorial optimization problems. among +the implications we obtain, we show that any mixed-integer extended formulation of sub-exponential +size for the matching polytope, cut polytope, traveling salesman polytope or dominant of the odd-cut +polytope, needs ω(n/ log n) many integer variables, where n is the number of vertices of the underlying +graph. conversely, the above-mentioned polyhedra admit polynomial-size mixed-integer formulations +with only o(n) or o(n log n) (for the traveling salesman polytope) many integer variables. +our results build upon a new decomposition technique that, for any convex set c, allows for approximating any mixed-integer description of c by the intersection of c with the union of a small number of +affine subspaces. +keywords: extension complexity, mixed-integer programs, extended formulations",8 +"abstract. in this paper we present grammatic – a tool for textual +syntax definition. grammatic serves as a front-end for parser generators +(and other tools) and brings modularity and reuse to their development +artifacts. it adapts techniques for separation of concerns from apsectoriented programming to grammars and uses templates for grammar +reuse. we illustrate usage of grammatic by describing a case study: +bringing separation of concerns to antlr parser generator, which is +achieved without a common time- and memory-consuming technique of +building an ast to separate semantic actions from a grammar definition.",6 +"abstract. let (r, m) be a relative cohen-macaulay local ring with respect to an ideal +a of r and set c := ht a. 
in this paper, we investigate some properties of the matlis
dual h_a^c(r)^∨ of the r-module h_a^c(r) and we show that such modules behave like canonical
modules over cohen-macaulay local rings. also, we provide some duality and equivalence results with respect to the module h_a^c(r)^∨, and these results lead to
generalizations of some known results, such as the local duality theorem, which have
been provided over a cohen-macaulay local ring which admits a canonical module.",0
"abstract probability of the observation and is an elementary observer itself.
since information initially originates in quantum process with conjugated probabilities, its study should focus not on
physics of observing process’ interacting particles but on its information-theoretical essence.
the approach substantiates every step of the origin through the unified formalism of mathematics and logic.
such formalism allows one to understand and describe the regularity (law) of these informational processes.
preexisting physical law is irrelevant to the emerging regularities in this approach.
the approach’s initial points are:
1. interaction of the objects or particles is a primary indicator of their origin. the field of probability is the source of
information and physics. the interactions are abstract “yes-no” actions of an impulse, probabilistic or real.
3",7
"abstract
controlling resource usage in distributed systems is a challenging task given the dynamics
involved in access granting. consider, for instance, the setting of floating licenses where access
can be granted if the request originates in a licensed domain and the number of active users
is within the license limits, and where licenses can be interchanged. access granting in such
scenarios is given in terms of floating authorizations, addressed in this paper as first class entities
of a process calculus model, encompassing the notions of domain, accounting and delegation.
+we present the operational semantics of the model in two equivalent alternative ways, each +informing on the specific nature of authorizations. we also introduce a typing discipline to +single out systems that never get stuck due to lacking authorizations, addressing configurations +where authorization assignment is not statically prescribed in the system specification.",6 +"abstract +the purpose of this paper is to construct confidence intervals for the regression coefficients in the +fine-gray model for competing risks data with random censoring, where the number of covariates +can be larger than the sample size. despite strong motivation from biostatistics applications, highdimensional fine-gray model has attracted relatively little attention among the methodological or +theoretical literatures. we fill in this blank by proposing first a consistent regularized estimator +and then the confidence intervals based on the one-step bias-correcting estimator. we are able +to generalize the partial likelihood approach for the fine-gray model under random censoring +despite many technical difficulties. we lay down a methodological and theoretical framework for +the one-step bias-correcting estimator with the partial likelihood, which does not have independent +and identically distributed entries. we also handle for our theory the approximation error from +the inverse probability weighting (ipw), proposing novel concentration results for time dependent +processes. in addition to the theoretical results and algorithms, we present extensive numerical +experiments and an application to a study of non-cancer mortality among prostate cancer patients +using the linked medicare-seer data. +key words: p-values, survival analysis, high-dimensional inference, one-step estimator.",10 +"abstract +visual question answering (or vqa) is a +new and exciting problem that combines +natural language processing and computer +vision techniques. 
we present a survey +of the various datasets and models that +have been used to tackle this task. the +first part of this survey details the various datasets for vqa and compares them +along some common factors. the second part of this survey details the different +approaches for vqa, classified into four +types: non-deep learning models, deep +learning models without attention, deep +learning models with attention, and other +models which do not fit into the first three. +finally, we compare the performances of +these approaches and provide some directions for future work.",2 +abstract,1 +"abstract +this paper presents minimax rates for density estimation when the data dimension d is allowed +to grow with the number of observations n rather than remaining fixed as in previous analyses. we +prove a non-asymptotic lower bound which gives the worst-case rate over standard classes of smooth +densities, and we show that kernel density estimators achieve this rate. we also give oracle choices for +the bandwidth and derive the fastest rate d can grow with n to maintain estimation consistency.",10 +"abstract. daligault, rao and thomassé asked whether every hereditary +graph class that is well-quasi-ordered by the induced subgraph relation has +bounded clique-width. lozin, razgon and zamaraev (jctb 2017+) gave a negative answer to this question, but their counterexample is a class that can only +be characterised by infinitely many forbidden induced subgraphs. this raises +the issue of whether the question has a positive answer for finitely defined hereditary graph classes. apart from two stubborn cases, this has been confirmed +when at most two induced subgraphs h1 , h2 are forbidden. we confirm it for +one of the two stubborn cases, namely for the (h1 , h2 ) = (triangle, p2 + p4 ) +case, by proving that the class of (triangle, p2 + p4 )-free graphs has bounded +clique-width and is well-quasi-ordered. our technique is based on a special decomposition of 3-partite graphs. 
we also use this technique to prove that the
class of (triangle, p1 + p5 )-free graphs, which is known to have bounded clique-width, is well-quasi-ordered. our results enable us to complete the classification
of graphs h for which the class of (triangle, h)-free graphs is well-quasi-ordered.",8
"abstract. we present the macaulay2 package numericalimplicitization, which allows for user-friendly computation of the basic invariants of the image of a polynomial map, such as dimension, degree, and hilbert function values.
this package relies on methods of numerical algebraic geometry, such as homotopy continuation and monodromy.",0
"abstract. in this paper we give necessary and sufficient conditions for the
cohen-macaulayness of the tangent cone of a monomial curve in the 4-dimensional
affine space. we study particularly the case where c is a gorenstein non-complete intersection monomial curve.",0
"abstract
a hallmark of human intelligence is the ability to ask rich, creative, and revealing
questions. here we introduce a cognitive model capable of constructing human-like questions. our approach treats questions as formal programs that, when executed on the state of the world, output an answer. the model specifies a probability
distribution over a complex, compositional space of programs, favoring concise
programs that help the agent learn in the current context. we evaluate our approach by modeling the types of open-ended questions generated by humans who
were attempting to learn about an ambiguous situation in a game. we find that our
model predicts what questions people will ask, and can creatively produce novel
questions that were not present in the training set.
in addition, we compare a number of model variants, finding that both question informativeness and complexity +are important for producing human-like questions.",2 +"abstract sources,” arxiv preprint arxiv:1707.09567, july +2017.",7 +"abstract—in the area of computer vision, deep learning has +produced a variety of state-of-the-art models that rely on massive +labeled data. however, collecting and annotating images from +the real world has a great demand for labor and money +investments and is usually too passive to build datasets with +specific characteristics, such as small area of objects and high +occlusion level. under the framework of parallel vision, this +paper presents a purposeful way to design artificial scenes and +automatically generate virtual images with precise annotations. +a virtual dataset named paralleleye is built, which can be used +for several computer vision tasks. then, by training the dpm +(deformable parts model) and faster r-cnn detectors, we prove +that the performance of models can be significantly improved +by combining paralleleye with publicly available real-world +datasets during the training phase. in addition, we investigate +the potential of testing the trained models from a specific aspect +using intentionally designed virtual datasets, in order to discover +the flaws of trained models. from the experimental results, we +conclude that our virtual dataset is viable to train and test the +object detectors. +index terms—parallel vision, virtual dataset, object detection, +deep learning.",1 +"abstract. we propose a fragment of many-sorted second order logic +esmt and show that checking satisfiability of sentences in this fragment +is decidable. this logic has an ∃∗ ∀∗ quantifier prefix that is conducive +to modeling synthesis problems. moreover, it allows reasoning using a +combination of background theories provided that they have a decidable +satisfiability problem for the ∃∗ ∀∗ fo-fragment (e.g., linear arithmetic). 
+our decision procedure reduces the satisfiability of esmt formulae to +satisfiability queries of the background theories, allowing us to use existing efficient smt solvers for these theories; hence our procedure can be +seen as effectively smt (esmt) reasoning. +keywords: second order logic, synthesis, decidable fragment",6 +"abstract. functions between groups with the property that all function conjugates are inverse preserving are called sandwich morphisms. these maps preserve a structure within the group known as the sandwich structure. sandwich +structures are left distributive idempotent left involutary magmas. these provide a generalisation of groups which we call a sandwich. this paper explores +sandwiches and their relationship to groups.",4 +"abstract. based on the structure theory of pairs of skew-symmetric matrices, +we give a conjecture for the hilbert series of the exterior algebra modulo the +ideal generated by two generic quadratic forms. we show that the conjectured +series is an upper bound in the coefficient-wise sense, and we determine a +majority of the coefficients. we also conjecture that the series is equal to the +series of the squarefree polynomial ring modulo the ideal generated by the +squares of two generic linear forms.",0 +"abstract +background: frameshift translation is an important phenomenon that +contributes to the appearance of novel coding dna sequences (cds) and +functions in gene evolution, by allowing alternative amino acid translations of +gene coding regions. +frameshift translations can be identified by aligning two cds, from a same +gene or from homologous genes, while accounting for their codon structure. two +main classes of algorithms have been proposed to solve the problem of aligning +cds, either by amino acid sequence alignment back-translation, or by +simultaneously accounting for the nucleotide and amino acid levels. 
the former +does not allow to account for frameshift translations and up to now, the latter +exclusively accounts for frameshift translation initiation, not considering the +length of the translation disruption caused by a frameshift. +results: we introduce a new scoring scheme with an algorithm for the pairwise +alignment of cds accounting for frameshift translation initiation and length, +while simultaneously considering nucleotide and amino acid sequences. the main +specificity of the scoring scheme is the introduction of a penalty cost accounting +for frameshift extension length to compute an adequate similarity score for a cds +alignment. the second specificity of the model is that the search space of the +problem solved is the set of all feasible alignments between two cds. previous +approaches have considered restricted search space or additional constraints on +the decomposition of an alignment into length-3 sub-alignments. the algorithm +described in this paper has the same asymptotic time complexity as the classical +needleman-wunsch algorithm. +conclusions: we compare the method to other cds alignment methods based +on an application to the comparison of pairs of cds from homologous human, +mouse and cow genes of ten mammalian gene families from the +ensembl-compara database. the results show that our method is particularly +robust to parameter changes as compared to existing methods. it also appears to +be a good compromise, performing well both in the presence and absence of +frameshift translations. an implementation of the method is available at +https://github.com/udes-cobius/fsepsa. +keywords: coding dna sequences; pairwise alignment; frameshifts; dynamic +programming.",8 +"abstract— this paper presents an adaptive high performance +control method for autonomous miniature race cars. 
racing +dynamics are notoriously hard to model from first principles, +which is addressed by means of a cautious nonlinear model +predictive control (nmpc) approach that learns to improve +its dynamics model from data and safely increases racing +performance. the approach makes use of a gaussian process +(gp) and takes residual model uncertainty into account through +a chance constrained formulation. we present a sparse gp +approximation with dynamically adjusting inducing inputs, +enabling a real-time implementable controller. the formulation +is demonstrated in simulations, which show significant improvement with respect to both lap time and constraint satisfaction +compared to an nmpc without model learning.",3 +"abstract—a key issue in the control of distributed discrete +systems modeled as markov decisions processes, is that often the +state of the system is not directly observable at any single location +in the system. the participants in the control scheme must share +information with one another regarding the state of the system +in order to collectively make informed control decisions, but this +information sharing can be costly. harnessing recent results from +information theory regarding distributed function computation, +in this paper we derive, for several information sharing model +structures, the minimum amount of control information that must +be exchanged to enable local participants to derive the same control decisions as an imaginary omniscient controller having full +knowledge of the global state. incorporating consideration for this +amount of information that must be exchanged into the reward +enables one to trade the competing objectives of minimizing this +control information exchange and maximizing the performance +of the controller. an alternating optimization framework is then +provided to help find the efficient controllers and messaging +schemes. 
a series of running examples from wireless resource +allocation illustrate the ideas and design tradeoffs.",3 +"abstract +we prove that if a group g = ab is the mutually permutable product of the supersoluble subgroups a and b, then the supersoluble residual of g coincides with the +nilpotent residual of the derived subgroup g′ . +keywords: finite groups, supersoluble subgroup, mutually permutable product. +msc2010: 20d20, 20e34",4 +"abstract. a century ago, camille jordan proved that the complex general linear +group gln (c) has the jordan property: there is a jordan constant cn such that every +finite subgroup h ≤ gln (c) has an abelian subgroup h1 of index [h : h1 ] ≤ cn . we +show that every connected algebraic group g (which is not necessarily linear) has the +jordan property with the jordan constant depending only on dim g, and that the full +automorphism group aut(x) of every projective variety x has the jordan property.",4 +"abstract +this thesis is concerned with studying the properties of gradings on several examples of cluster algebras, primarily of infinite type. we start by considering +two classes of finite type cluster algebras: those of type bn and cn . we give the +number of cluster variables of each occurring degree and verify that the grading +is balanced. these results complete a classification in [16] for coefficient-free finite +type cluster algebras. +we then consider gradings on cluster algebras generated by 3×3 skew-symmetric +matrices. we show that the mutation-cyclic matrices give rise to gradings in which +all occurring degrees are positive and have only finitely many associated cluster +variables (excepting one particular case). for the mutation-acyclic matrices, we +prove that all occurring degrees have infinitely many variables and give a direct +proof that the gradings are balanced. 
+we provide a condition for a graded cluster algebra generated by a quiver to +have infinitely many degrees, based on the presence of a subquiver in its mutation +class. we use this to study the gradings on cluster algebras that are (quantum) +coordinate rings of matrices and grassmannians and show that they contain cluster +variables of all degrees in n. +next we consider the finite list (given in [9]) of mutation-finite quivers that do +not correspond to triangulations of marked surfaces. we show that a(x7 ) has a +grading in which there are only two degrees, with infinitely many cluster variables +e6 , e +e7 and e +e8 have infinitely +in both. we also show that the gradings arising from e +many variables in certain degrees. +finally, we study gradings arising from triangulations of marked bordered 2dimensional surfaces (see [10]). we adapt a definition from [24] to define the +space of valuation functions on such a surface and prove combinatorially that this +space is isomorphic to the space of gradings on the associated cluster algebra. we +illustrate this theory by applying it to a family of examples, namely, the annulus +with n + m marked points. we show that the standard grading is of mixed type, +i",0 +"abstract. feedforward neural networks have wide applicability in various +disciplines of science due to their universal approximation property. some authors have shown that single hidden layer feedforward neural networks (slfns) +with fixed weights still possess the universal approximation property provided +that approximated functions are univariate. but this phenomenon does not +lay any restrictions on the number of neurons in the hidden layer. the more +this number, the more the probability of the considered network to give precise results. in this note, we constructively prove that slfns with the fixed +weight 1 and two neurons in the hidden layer can approximate any continuous +function on a compact subset of the real line. 
the applicability of this result +is demonstrated in various numerical examples. finally, we show that slfns +with fixed weights cannot approximate all continuous multivariate functions.",7 +"abstract. we consider the problem of inferring temporal specifications from +demonstrations by an agent interacting with an uncertain, stochastic environment. +such specifications are useful for correct-by-construction control of autonomous +systems operating in uncertain environments. some demonstrations may have errors, and the specification inference method must be robust to them. we provide +a novel formulation of the problem as a maximum a posteriori (map) probability +inference problem, and give an efficient approach to solve this problem, demonstrated by case studies inspired by robotics.",2 +"abstract. this paper introduces scavenger, the first theorem prover for +pure first-order logic without equality based on the new conflict resolution +calculus. conflict resolution has a restricted resolution inference rule that +resembles (a first-order generalization of) unit propagation as well as a +rule for assuming decision literals and a rule for deriving new clauses by +(a first-order generalization of) conflict-driven clause learning.",2 +"abstract +we consider a reinforcement learning (rl) setting in which the agent interacts with a +sequence of episodic mdps. at the start of each episode the agent has access to some +side-information or context that determines the dynamics of the mdp for that episode. +our setting is motivated by applications in healthcare where baseline measurements of a +patient at the start of a treatment episode form the context that may provide information +about how the patient might respond to treatment decisions. +we propose algorithms for learning in such contextual markov decision processes +(cmdps) under an assumption that the unobserved mdp parameters vary smoothly with +the observed context. 
we also give lower and upper pac bounds under the smoothness +assumption. because our lower bound has an exponential dependence on the dimension, we +consider a tractable linear setting where the context is used to create linear combinations +of a finite set of mdps. for the linear setting, we give a pac learning algorithm based on +kwik learning techniques. +keywords: reinforcement learning, pac bounds, kwik learning.",2 +"abstract— in this paper, we develop a robust efficient visual slam system that utilizes heterogeneous point and line +features. by leveraging orb-slam [1], the proposed system +consists of stereo matching, frame tracking, local mapping, +loop detection, and bundle adjustment of both point and line +features. in particular, as the main theoretical contributions +of this paper, we, for the first time, employ the orthonormal +representation as the minimal parameterization to model line +features along with point features in visual slam and analytically derive the jacobians of the re-projection errors with +respect to the line parameters, which significantly improves +the slam solution. the proposed slam has been extensively +tested in both synthetic and real-world experiments whose +results demonstrate that the proposed system outperforms the +state-of-the-art methods in various scenarios.",1 +"abstract +we consider the problem of non-parametric regression with a potentially large number of covariates. we propose a convex, penalized estimation framework that is particularly well-suited for highdimensional sparse additive models. the proposed approach combines appealing features of finite basis +representation and smoothing penalties for non-parametric estimation. in particular, in the case of +additive models, a finite basis representation provides a parsimonious representation for fitted functions but is not adaptive when component functions posses different levels of complexity. 
on the other +hand, a smoothing spline type penalty on the component functions is adaptive but does not offer a parsimonious representation of the estimated function. the proposed approach simultaneously achieves +parsimony and adaptivity in a computationally efficient framework. we demonstrate these properties +through empirical studies on both real and simulated datasets. we show that our estimator converges +at the minimax rate for functions within a hierarchical class. we further establish minimax rates for a +large class of sparse additive models. the proposed method is implemented using an efficient algorithm +that scales similarly to the lasso with the number of covariates and samples size.",10 +"abstract +a system with artificial intelligence usually relies on symbol manipulation, at least partly and implicitly. however, the interpretation of the symbols – what they represent and what they are about – is +ultimately left to humans, as designers and users of the system. how symbols can acquire meaning for +the system itself, independent of external interpretation, is an unsolved problem. some grounding of +symbols can be obtained by embodiment, that is, by causally connecting symbols (or sub-symbolic variables) to the physical environment, such as in a robot with sensors and effectors. however, a causal +connection as such does not produce representation and aboutness of the kind that symbols have for +humans. here i present a theory that explains how humans and other living organisms have acquired +the capability to have symbols and sub-symbolic variables that represent, refer to, and are about something else. the theory shows how reference can be to physical objects, but also to abstract objects, and +even how it can be misguided (errors in reference) or be about non-existing objects. 
i subsequently +abstract the primary components of the theory from their biological context, and discuss how and under +what conditions the theory could be implemented in artificial agents. a major component of the theory +is the strong nonlinearity associated with (potentially unlimited) self-reproduction. the latter is likely +not acceptable in artificial systems. it remains unclear if goals other than those inherently serving selfreproduction can have aboutness and if such goals could be stabilized.",9 +"abstractions and a natural separation of protocols +and computations. we describe a reo-to-java compiler and illustrate its use through examples.",6 +"abstract—the realisation of sensing modalities based on the +principles of compressed sensing is often hindered by discrepancies between the mathematical model of its sensing operator, +which is necessary during signal recovery, and its actual physical +implementation, which can amply differ from the assumed +model. in this paper we tackle the bilinear inverse problem of +recovering a sparse input signal and some unknown, unstructured +multiplicative factors affecting the sensors that capture each +compressive measurement. our methodology relies on collecting +a few snapshots under new draws of the sensing operator, and +applying a greedy algorithm based on projected gradient descent +and the principles of iterative hard thresholding. we explore +empirically the sample complexity requirements of this algorithm +by testing its phase transition, and show in a practically relevant +instance of this problem for compressive imaging that the exact +solution can be obtained with only a few snapshots. +index terms—compressed sensing, blind calibration, iterative hard thresholding, non-convex optimisation, bilinear +inverse problems",7 +"abstract +trial-and-error based reinforcement learning +(rl) has seen rapid advancements in recent +times, especially with the advent of deep neural networks. 
however, the majority of autonomous rl algorithms require a large number of interactions with the environment. a +large number of interactions may be impractical in many real-world applications, such as +robotics, and many practical systems have to +obey limitations in the form of state space +or control constraints. to reduce the number +of system interactions while simultaneously +handling constraints, we propose a modelbased rl framework based on probabilistic +model predictive control (mpc). in particular, we propose to learn a probabilistic transition model using gaussian processes (gps) +to incorporate model uncertainty into longterm predictions, thereby, reducing the impact of model errors. we then use mpc to +find a control sequence that minimises the +expected long-term cost. we provide theoretical guarantees for first-order optimality in +the gp-based transition models with deterministic approximate inference for long-term +planning. we demonstrate that our approach +does not only achieve state-of-the-art data +efficiency, but also is a principled way for rl +in constrained environments.",3 +"abstract +current measures of machine intelligence are either difficult to evaluate or lack the ability to test +a robot’s problem-solving capacity in open worlds. +we propose a novel evaluation framework based on +the formal notion of macgyver test which provides +a practical way for assessing the resilience and resourcefulness of artificial agents.",2 +abstract,9 +"abstract variety +in a complete variety. j. math. kyoto univ. 3 (1963), 89–102.",0 +"abstract—emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating +them on conventional computers, particularly in terms of speed +and energy consumption. however, this usually comes at the +cost of reduced control over the dynamics of the emulated +networks. 
in this paper, we demonstrate how iterative training +of a hardware-emulated network can compensate for anomalies +induced by the analog substrate. we first convert a deep +neural network trained in software to a spiking network on the +brainscales wafer-scale neuromorphic system, thereby enabling +an acceleration factor of 10 000 compared to the biological +time domain. this mapping is followed by the in-the-loop +training, where in each training step, the network activity is first +recorded in hardware and then used to compute the parameter +updates in software via backpropagation. an essential finding +is that the parameter updates do not have to be precise, but +only need to approximately follow the correct gradient, which +simplifies the computation of updates. using this approach, +after only several tens of iterations, the spiking network shows +an accuracy close to the ideal software-emulated prototype. +the presented techniques show that deep spiking networks +emulated on analog neuromorphic devices can attain good +computational performance despite the inherent variations of +the analog substrate.",9 +"abstract +we investigate the ihara zeta functions of finite schreier graphs γn of the +basilica group. we show that γ1+n is 2 sheeted unramified normal covering +z +. in fact, for any n > 1, r ≥ 1 the +of γn , ∀ n ≥ 1 with galois group +2z +n +graph γn+r is 2 sheeted unramified, non normal covering of γr . in order to +do this we give the definition of the generalized replacement product of +schreier graphs. we also show the corresponding results in zig zag product +of schreier graphs γn with a 4 cycle.",4 +"abstract—state-of-the-art static analysis tools for verifying +finite-precision code compute worst-case absolute error bounds +on numerical errors. these are, however, often not a good estimate of accuracy as they do not take into account the magnitude +of the computed values. 
relative errors, which compute errors +relative to the value’s magnitude, are thus preferable. while +today’s tools do report relative error bounds, these are merely +computed via absolute errors and thus not necessarily tight or +more informative. furthermore, whenever the computed value +is close to zero on part of the domain, the tools do not report +any relative error estimate at all. surprisingly, the quality of +relative error bounds computed by today’s tools has not been +systematically studied or reported to date. +in this paper, we investigate how state-of-the-art static techniques for computing sound absolute error bounds can be +used, extended and combined for the computation of relative +errors. our experiments on a standard benchmark set show that +computing relative errors directly, as opposed to via absolute +errors, is often beneficial and can provide error estimates up +to six orders of magnitude tighter, i.e. more accurate. we also +show that interval subdivision, another commonly used technique +to reduce over-approximations, has less benefit when computing +relative errors directly, but it can help to alleviate the effects of +the inherent issue of relative error estimates close to zero.",6 +abstract,9 +"abstract. we present a formal framework for repairing infinite-state, +imperative, sequential programs, with (possibly recursive) procedures +and multiple assertions; the framework can generate repaired programs +by modifying the original erroneous program in multiple program locations, and can ensure the readability of the repaired program using +user-defined expression templates; the framework also generates a set of +inductive assertions that serve as a proof of correctness of the repaired +program. 
as a step toward integrating programmer intent and intuition +in automated program repair, we present a cost-aware formulation — +given a cost function associated with permissible statement modifications, the goal is to ensure that the total program modification cost does +not exceed a given repair budget. as part of our predicate abstractionbased solution framework, we present a sound and complete algorithm +for repair of boolean programs. we have developed a prototype tool +based on smt solving and used it successfully to repair diverse errors in +benchmark c programs.",6 +"abstract— swarm systems constitute a challenging problem +for reinforcement learning (rl) as the algorithm needs to learn +decentralized control policies that can cope with limited local +sensing and communication abilities of the agents. although +there have been recent advances of deep rl algorithms applied +to multi-agent systems, learning communication protocols while +simultaneously learning the behavior of the agents is still +beyond the reach of deep rl algorithms. however, while it +is often difficult to directly define the behavior of the agents, +simple communication protocols can be defined more easily +using prior knowledge about the given task. in this paper, +we propose a number of simple communication protocols that +can be exploited by deep reinforcement learning to find decentralized control policies in a multi-robot swarm environment. +the protocols are based on histograms that encode the local +neighborhood relations of the agents and can also transmit +task-specific information, such as the shortest distance and +direction to a desired target. in our framework, we use an +adaptation of trust region policy optimization to learn complex collaborative tasks, such as formation building, building a +communication link, and pushing an intruder. 
we evaluate our +findings in a simulated 2d-physics environment, and compare +the implications of different communication protocols.",2 +"abstract +in the modern era, abundant information is easily accessible from various sources, however only +a few of these sources are reliable as they mostly contain unverified contents. we develop a system +to validate the truthfulness of a given statement together with underlying evidence. the proposed +system provides supporting evidence when the statement is tagged as false. our work relies on an +inference method on a knowledge graph (kg) to identify the truthfulness of statements. in order +to extract the evidence of falseness, the proposed algorithm takes into account combined +knowledge from kg and ontologies. the system shows very good results as it provides valid and +concise evidence. the quality of kg plays a role in the performance of the inference method which +explicitly affects the performance of our evidence-extracting algorithm.",2 +"abstract. we use the natural homeomorphism between a regular cw-complex x and its face poset px to establish a canonical +isomorphism between the cellular chain complex of x and the result of applying the poset construction of [cla10] to px . for a +monomial ideal whose free resolution is supported on a regular +cw-complex, this isomorphism allows the free resolution of the +ideal to be realized as a cw-poset resolution. conversely, any +cw-poset resolution of a monomial ideal gives rise to a resolution +supported on a regular cw-complex.",0 +"abstract +a connected path decomposition of a simple graph g is a path decomposition (x1 , . . . , xl ) +such that the subgraph of g induced by x1 ∪ · · · ∪ xi is connected for each i ∈ {1, . . . , l}. the +connected pathwidth of g is then the minimum width over all connected path decompositions +of g. we prove that for each fixed k, the connected pathwidth of any input graph can +be computed in polynomial-time. 
this answers an open question raised by fedor v. fomin +during the grasta 2017 workshop, since connected pathwidth is equivalent to the connected +(monotone) node search game.",8 +"abstract +we consider interactive algorithms in the pool-based setting, and in the streambased setting. interactive algorithms observe suggested elements (representing actions or +queries), and interactively select some of them and receive responses. pool-based algorithms can select elements at any order, while stream-based algorithms observe elements +in sequence, and can only select elements immediately after observing them. we assume +that the suggested elements are generated independently from some source distribution, +and ask what is the stream size required for emulating a pool algorithm with a given pool +size. we provide algorithms and matching lower bounds for general pool algorithms, +and for utility-based pool algorithms. we further show that a maximal gap between the +two settings exists also in the special case of active learning for binary classification.",10 +"abstract. we study the prime graph question for integral group rings. this question can +be reduced to almost simple groups by a result of kimmerle and konovalov. we prove that +the prime graph question has an affirmative answer for all almost simple groups having a +socle isomorphic to psl(2, pf ) for f ≤ 2, establishing the prime graph question for all groups +where the only non-abelian composition factors are of the aforementioned form. using this, we +determine exactly how far the so-called help method can take us for (almost simple) groups +having an order divisible by at most 4 different primes.",4 +"abstract +various applications involve assigning discrete label values to a collection of objects based on some +pairwise noisy data. due to the discrete—and hence nonconvex—structure of the problem, computing the +optimal assignment (e.g. maximum likelihood assignment) becomes intractable at first sight. 
this paper +makes progress towards efficient computation by focusing on a concrete joint alignment problem—that +is, the problem of recovering n discrete variables xi ∈ {1, · · · , m}, 1 ≤ i ≤ n given noisy observations +of their modulo differences {xi − xj mod m}. we propose a low-complexity and model-free procedure, +which operates in a lifted space by representing distinct label values in orthogonal directions, and which +attempts to optimize quadratic functions over hypercubes. starting with a first guess computed via a +spectral method, the algorithm successively refines the iterates via projected power iterations. we prove +that for a broad class of statistical models, the proposed projected power method makes no error—and +hence converges to the maximum likelihood estimate—in a suitable regime. numerical experiments have +been carried out on both synthetic and real data to demonstrate the practicality of our algorithm. we +expect this algorithmic framework to be effective for a broad range of discrete assignment problems.",7 +"abstract — brain-inspired learning mechanisms, e.g. spike +timing dependent plasticity (stdp), enable agile and fast on-thefly adaptation capability in a spiking neural network. when +incorporating emerging nanoscale resistive non-volatile memory +(nvm) devices, with ultra-low power consumption and highdensity integration capability, a spiking neural network hardware +would result in several orders of magnitude reduction in energy +consumption at a very small form factor and potentially herald +autonomous learning machines. however, actual memory devices +have shown to be intrinsically binary with stochastic switching, +and thus impede the realization of ideal stdp with continuous +analog values. in this work, a dendritic-inspired processing +architecture is proposed in addition to novel cmos neuron +circuits. 
the utilization of spike attenuations and delays +transforms the traditionally undesired stochastic behavior of +binary nvms into a useful leverage that enables biologicallyplausible stdp learning. as a result, this work paves a pathway +to adopt practical binary emerging nvm devices in brain-inspired +neuromorphic computing. +index terms— brain-inspired computing, crossbar, +neuromorphic computing, machine learning, memristor, +emerging non-volatile memory, rram, silicon neuron, spiketiming dependent plasticity (stdp), spiking neural network.",9 +"abstract. a theorem proved by dobrinskaya in 2006 shows that there is a +strong connection between the k(π, 1) conjecture for artin groups and the +classifying space of artin monoids. more recently ozornova obtained a different +proof of dobrinskaya’s theorem based on the application of discrete morse +theory to the standard cw model of the classifying space of an artin monoid. +in ozornova’s work there are hints at some deeper connections between the +above-mentioned cw model and the salvetti complex, a cw complex which +arises in the combinatorial study of artin groups. in this work we show that +such connections actually exist, and as a consequence we derive yet another +proof of dobrinskaya’s theorem.",4 +"abstract—the timely provision of traffic sign information +to drivers is essential for the drivers to respond, to ensure +safe driving, and to avoid traffic accidents in a timely manner. we proposed a timely visual recognizability quantitative +evaluation method for traffic signs in large-scale transportation +environments. to achieve this goal, we first address the concept +of a visibility field to reflect the visible distribution of threedimensional (3d) space and construct a traffic sign visibility +evaluation model (vem) to measure the traffic sign’s visibility +for a given viewpoint. 
then, based on the vem, we proposed the +concept of the visual recognizability field (vrf) to reflect the +visual recognizability distribution in 3d space and established a +visual recognizability evaluation model (vrem) to measure +a traffic sign’s visual recognizability for a given viewpoint. +next, we proposed a traffic sign timely visual recognizability +evaluation model (tstvrem) by combining vrem, the actual +maximum continuous visual recognizable distance, and traffic +big data to measure a traffic signs visual recognizability in +different lanes. finally, we presented an automatic algorithm to +implement the tstvrem model through traffic sign and road +marking detection and classification, traffic sign environment +point cloud segmentation, viewpoints calculation, and tstvrem +model realization. the performance of our method for traffic sign +timely visual recognizability evaluation is tested on three road +point clouds acquired by a mobile laser scanning system (riegl +vmx-450) according to road traffic signs and markings (gb +5768-1999 in china) , showing that our method is feasible and +efficient. +index terms—traffic sign, visibility, visibility field, visual +recognizability field, recognizability, mobile laser scanning, point +clouds.",1 +"abstract—deep learning methods can play a crucial role in +anomaly detection, prediction, and supporting decision making +for applications like personal health-care, pervasive body sensing, +etc. however, current architecture of deep networks suffers the +privacy issue that users need to give out their data to the +model (typically hosted in a server or a cluster on cloud) for +training or prediction. this problem is getting more severe for +those sensitive health-care or medical data (e.g fmri or body +sensors measures like eeg signals). in addition to this, there +is also a security risk of leaking these data during the data +transmission from user to the model (especially when it’s through +internet). 
targeting at these issues, in this paper we proposed a +new architecture for deep network in which users don’t reveal +their original data to the model. in our method, feed-forward +propagation and data encryption are combined into one process: +we migrate the first layer of deep network to users’ local devices, +and apply the activation functions locally, and then use “dropping +activation output” method to make the output non-invertible. +the resulting approach is able to make model prediction without +accessing users’ sensitive raw data. experiment conducted in this +paper showed that our approach achieves the desirable privacy +protection requirement, and demonstrated several advantages +over the traditional approach with encryption / decryption.",1 +"abstract. in most constraint programming systems, a limited number +of search engines is offered while the programming of user-customized +search algorithms requires low-level efforts, which complicates the deployment of such algorithms. to alleviate this limitation, concepts such +as computation spaces have been developed. computation spaces provide +a coarse-grained restoration mechanism, because they store all information contained in a search tree node. other granularities are possible, and +in this paper we make the case for dynamically adapting the restoration +granularity during search. in order to elucidate programmable restoration granularity, we present restoration as an aspect of a constraint programming system, using the model of aspect-oriented programming. a +proof-of-concept implementation using gecode shows promising results.",6 +"abstract +deep learning approaches have made tremendous +progress in the field of semantic segmentation over the past +few years. however, most current approaches operate in +the 2d image space. direct semantic segmentation of unstructured 3d point clouds is still an open research problem. 
the recently proposed pointnet architecture presents +an interesting step ahead in that it can operate on unstructured point clouds, achieving encouraging segmentation results. however, it subdivides the input points into a grid of +blocks and processes each such block individually. in this +paper, we investigate the question how such an architecture +can be extended to incorporate larger-scale spatial context. +we build upon pointnet and propose two extensions that +enlarge the receptive field over the 3d scene. we evaluate +the proposed strategies on challenging indoor and outdoor +datasets and show improved results in both scenarios.",1 +"abstract +set-membership estimation is usually formulated in the context of set-valued calculus and no probabilistic calculations are necessary. +in this paper, we show that set-membership estimation can be equivalently formulated in the probabilistic setting by employing sets of +probability measures. inference in set-membership estimation is thus carried out by computing expectations with respect to the updated +set of probability measures p as in the probabilistic case. in particular, it is shown that inference can be performed by solving a particular +semi-infinite linear programming problem, which is a special case of the truncated moment problem in which only the zero-th order +moment is known (i.e., the support). by writing the dual of the above semi-infinite linear programming problem, it is shown that, if the +nonlinearities in the measurement and process equations are polynomial and if the bounding sets for initial state, process and measurement +noises are described by polynomial inequalities, then an approximation of this semi-infinite linear programming problem can efficiently be +obtained by using the theory of sum-of-squares polynomial optimization. 
we then derive a smart greedy procedure to compute a polytopic +outer-approximation of the true membership-set, by computing the minimum-volume polytope that outer-bounds the set that includes all +the means computed with respect to p. +key words: state estimation; filtering; set-membership estimation; set of probability measures; sum-of-squares polynomials.",3 +"abstract—evolutionary computation methods have been successfully applied to neural networks since two decades ago, while +those methods cannot scale well to the modern deep neural networks due to the complicated architectures and large quantities +of connection weights. in this paper, we propose a new method +using genetic algorithms for evolving the architectures and +connection weight initialization values of a deep convolutional +neural network to address image classification problems. in the +proposed algorithm, an efficient variable-length gene encoding +strategy is designed to represent the different building blocks and +the unpredictable optimal depth in convolutional neural networks. +in addition, a new representation scheme is developed for effectively initializing connection weights of deep convolutional neural +networks, which is expected to avoid networks getting stuck into +local minima which is typically a major issue in the backward +gradient-based optimization. furthermore, a novel fitness evaluation method is proposed to speed up the heuristic search +with substantially less computational resource. the proposed +algorithm is examined and compared with 22 existing algorithms +on nine widely used image classification tasks, including the stateof-the-art methods. the experimental results demonstrate the +remarkable superiority of the proposed algorithm over the stateof-the-art algorithms in terms of classification error rate and the +number of parameters (weights). +index terms—genetic algorithms, convolutional neural network, image classification, deep learning.",9 +"abstract. 
in this paper we classify the isomorphism classes of four dimensional nilpotent associative algebras over a field f, studying regular subgroups +of the affine group agl4 (f). in particular we provide explicit representatives +for such classes when f is a finite field, the real field r or an algebraically +closed field.",4 +"abstract +this paper attempts a more formal approach to the legibility of text +based programming languages, presenting, with proof, minimum +possible ways of representing structure in text interleaved with +information. this presumes that a minimalist approach is best for +purposes of human readability, data storage and transmission, and +machine evaluation. +several proposals are given for improving the expression of interleaved hierarchical structure. for instance, a single colon can +replace a pair of brackets, and bracket types do not need to be repeated in both opening and closing symbols or words. historic and +customary uses of punctuation symbols guided the chosen form +and nature of the improvements.",6 +"abstract: the multilinear normal distribution is a widely used tool in tensor analysis +of magnetic resonance imaging (mri). diffusion tensor mri provides a statistical +estimate of a symmetric 2nd -order diffusion tensor, for each voxel within an imaging +volume. in this article, tensor elliptical (te) distribution is introduced as an extension to the multilinear normal (mln) distribution. some properties including the +characteristic function and distribution of affine transformations are given. an integral representation connecting densities of te and mln distributions is exhibited +that is used in deriving the expectation of any measurable function of a te variate. +key words and phrases: characteristic generator; inverse laplace transform; stochastic representation; tensor; vectorial operator. 
+ams classification: primary: 62e15, 60e10 secondary: 53a45, 15a69",10 +"abstract—tracking with a pan-tilt-zoom (ptz) camera has +been a research topic in computer vision for many years. +compared to tracking with a still camera, the images captured +with a ptz camera are highly dynamic in nature because the +camera can perform large motion resulting in quickly changing +capture conditions. furthermore, tracking with a ptz camera +involves camera control to position the camera on the target. for +successful tracking and camera control, the tracker must be fast +enough, or has to be able to predict accurately the next position +of the target. therefore, standard benchmarks do not allow to +assess properly the quality of a tracker for the ptz scenario. in +this work, we use a virtual ptz framework to evaluate different +tracking algorithms and compare their performances. we also +extend the framework to add target position prediction for the +next frame, accounting for camera motion and processing delays. +by doing this, we can assess if predicting can make long-term +tracking more robust as it may help slower algorithms for keeping +the target in the field of view of the camera. results confirm that +both speed and robustness are required for tracking under the +ptz scenario. +index terms—pan-tilt-zoom tracking, performance evaluation, tracking algorithms",1 +"abstract +we investigate the problem of language-based image editing (lbie) in this work. given a source +image and a natural language description, we want to generate a target image by editing the source image based on the description. we propose a generic modeling framework for two sub-tasks of lbie: +language-based image segmentation and image colorization. the framework uses recurrent attentive +models to fuse image and language features. 
instead of using a fixed step size, we introduce for each region of the image a termination gate to dynamically determine in each inference step whether to continue +extrapolating additional information from the textual description. the effectiveness of the framework has +been validated on three datasets. first, we introduce a synthetic dataset, called cosal, to evaluate the +end-to-end performance of our lbie system. second, we show that the framework leads to state-of-theart performance on image segmentation on the referit dataset. third, we present the first language-based +colorization result on the oxford-102 flowers dataset, laying the foundation for future research.",1 +"abstract +we prove an ω(d/ log sw +nd ) lower bound for the average-case cell-probe complexity of deterministic or las vegas randomized algorithms solving approximate near-neighbor (ann) problem in +d-dimensional hamming space in the cell-probe model with w-bit cells, using a table of size s. +this lower bound matches the highest known worst-case cell-probe lower bounds for any static +data structure problems. +this average-case cell-probe lower bound is proved in a general framework which relates the +cell-probe complexity of ann to isoperimetric inequalities in the underlying metric space. a +tighter connection between ann lower bounds and isoperimetric inequalities is established by +a stronger richness lemma proved by cell-sampling techniques.",8 +"abstract +reduced-rank regression is a dimensionality reduction method with many applications. the asymptotic theory for reduced rank estimators of parameter matrices in multivariate linear models has been +studied extensively. in contrast, few theoretical results are available for reduced-rank multivariate generalised linear models. we develop m-estimation theory for concave criterion functions that are maximised +over parameters spaces that are neither convex nor closed. 
these results are used to derive the consistency +and asymptotic distribution of maximum likelihood estimators in reduced-rank multivariate generalised +linear models, when the response and predictor vectors have a joint distribution. we illustrate our results +in a real data classification problem with binary covariates.",10 +"abstract +recent results by alagic and russell have given some evidence that +the even-mansour cipher may be secure against quantum adversaries +with quantum queries, if considered over other groups than (z/2)n . +this prompts the question as to whether or not other classical schemes +may be generalized to arbitrary groups and whether classical results +still apply to those generalized schemes. +in this paper, we generalize the even-mansour cipher and the feistel cipher. we show that even and mansour’s original notions of secrecy are obtained on a one-key, group variant of the even-mansour +cipher. we generalize the result by kilian and rogaway, that the +even-mansour cipher is pseudorandom, to super pseudorandomness, +also in the one-key, group case. using a slide attack we match the +bound found above. after generalizing the feistel cipher to arbitrary +groups we resolve an open problem of patel, ramzan, and sundaram +by showing that the 3-round feistel cipher over an arbitrary group is +not super pseudorandom. +finally, we generalize a result by gentry and ramzan showing that +the even-mansour cipher can be implemented using the feistel cipher +as the public permutation. in this last result, we also consider the +one-key case over a group and generalize their bound.",4 +"abstract– the liouville theorem states that bounded holomorphic complex functions +are necessarily constant. holomorphic functions fulfill the socalled cauchy-riemann +(cr) conditions. the cr conditions mean that a complex z-derivative is independent +of the direction. 
holomorphic functions are ideal for activation functions of complex +neural networks, but the liouville theorem makes them useless. yet recently the use +of hyperbolic numbers, lead to the construction of hyperbolic number neural networks. +we will describe the cauchy-riemann conditions for hyperbolic numbers and show that +there exists a new interesting type of bounded holomorphic functions of hyperbolic +numbers, which are not constant. we give examples of such functions. they therefore +substantially expand the available candidates for holomorphic activation functions for +hyperbolic number neural networks. +keywords: hyperbolic numbers, liouville theorem, cauchy-riemann conditions, +bounded holomorphic functions",9 +"abstract +the social force model is one of the most prominent models of pedestrian dynamics. as such naturally +much discussion and criticism has spawned around it, some of which concerns the existence of oscillations in the +movement of pedestrians. this contribution is investigating under which circumstances, parameter choices, and +model variants oscillations do occur and how this can be prevented. it is shown that oscillations can be excluded +if the model parameters fulfill certain relations. the fact that with some parameter choices oscillations occur +and with some not is exploited to verify a specific computer implementation of the model.",5 +"abstract, +conceptual ideas from ai safety, to bridge the gap +between practical contemporary challenges and +longer term concerns which are of an uncertain +time horizon. in addition to providing concrete +problems for researchers and engineers to tackle, +we hope this discussion will be a useful introduction to the concept of oracle ai for newcomers to +the subject. we state at the outset that within +the context of oracle ai, our analysis is limited +in scope to systems which perform mathematical computation, and not to oracles in general. 
+nonetheless, considering how little effort has been +directed at the superintelligence control problem, +we are confident that there is low-hanging fruit +in addressing these more general issues which are +awaiting discovery.",2 +"abstract +this is the second part of a two part work in which we prove that for every +finitely generated subgroup γ < out(fn ), either γ is virtually abelian or its +second bounded cohomology hb2 (γ; r) contains an embedding of ℓ1 . here in +part ii we focus on finite lamination subgroups γ — meaning that the set of +all attracting laminations of elements of γ is finite — and on the construction +of hyperbolic actions of those subgroups to which the general theory of part i +is applicable.",4 +"abstract. we propose a type-based resource usage analysis for the π-calculus extended +with resource creation/access primitives. the goal of the resource usage analysis is to +statically check that a program accesses resources such as files and memory in a valid +manner. our type system is an extension of previous behavioral type systems for the πcalculus. it can guarantee the safety property that no invalid access is performed, as well as +the property that necessary accesses (such as the close operation for a file) are eventually +performed unless the program diverges. a sound type inference algorithm for the type +system is also developed to free the programmer from the burden of writing complex type +annotations. based on our algorithm, we have implemented a prototype resource usage +analyzer for the π-calculus. to the authors’ knowledge, this is the first type-based resource +usage analysis that deals with an expressive concurrent language like the π-calculus.",6 +abstract,10 +"abstract. we show that for each positive integer k there exist right-angled +artin groups containing free-by-cyclic subgroups whose monodromy automorphisms grow as nk . 
as a consequence we produce examples of right-angled +artin groups containing finitely presented subgroups whose dehn functions +grow as nk`2 .",4 +"abstract +we compute a canonical circular-arc representation for a given circular-arc (ca) graph which +implies solving the isomorphism and recognition problem for this class. to accomplish this we +split the class of ca graphs into uniform and non-uniform ones and employ a generalized version +of the argument given by köbler et al. (2013) that has been used to show that the subclass of +helly ca graphs can be canonized in logspace. for uniform ca graphs our approach works +in logspace and in addition to that helly ca graphs are a strict subset of uniform ca graphs. +thus our result is a generalization of the canonization result for helly ca graphs. in the nonuniform case a specific set ω of ambiguous vertices arises. by choosing the parameter k to be the +cardinality of ω this obstacle can be solved by brute force. this leads to an o(k + log n) space +algorithm to compute a canonical representation for non-uniform and therefore all ca graphs. +1998 acm subject classification g.2.2 graph theory +keywords and phrases graph isomorphism, canonical representation, parameterized algorithm",8 +"abstract +in this paper, we consider a scenario where an unmanned aerial vehicle (uav) collects data from +a set of sensors on a straight line. the uav can either cruise or hover while communicating with the +sensors. the objective is to minimize the uav’s total aviation time from a starting point to a destination +while allowing each sensor to successfully upload a certain amount of data using a given amount of +energy. the whole trajectory is divided into non-overlapping data collection intervals, in each of which +one sensor is served by the uav. the data collection intervals, the uav’s navigation speed and the +sensors’ transmit powers are jointly optimized. the formulated aviation time minimization problem is +difficult to solve. 
we first show that when only one sensor node is present, the sensor’s transmit power +follows a water-filling policy and the uav aviation speed can be found efficiently by bisection search. +then we show that for the general case with multiple sensors, the aviation time minimization problem +can be equivalently reformulated as a dynamic programming (dp) problem. the subproblem involved +in each stage of the dp reduces to handle the case with only one sensor node. numerical results present +insightful behaviors of the uav and the sensors. specifically, it is observed that the uav’s optimal +speed is proportional to the given energy and the inter-sensor distance, but inversely proportional to the +data upload requirement.",7 +"abstract. the growth function is the generating function for sizes of spheres +around the identity in cayley graphs of groups. we present a novel method +to calculate growth functions for automatic groups with normal form recognizing automata that recognize a single normal form for each group element, +and are at most context free in complexity: context free grammars can be +translated into algebraic systems of equations, whose solutions represent generating functions of their corresponding non-terminal symbols. this approach +allows us to seamlessly introduce weightings on the growth function: assign +different or even distinct weights to each of the generators in an underlying +presentation, such that this weighting is reflected in the growth function. we +recover known growth functions for small braid groups, and calculate growth +functions that weight each generator in an automatic presentation of the braid +groups according to their lengths in braid generators.",4 +"abstract. let g be a finite group and cd(g) denote the set of complex irreducible character degrees of g. 
in this paper, we prove that if g is a finite +group and h is an almost simple group whose socle is mathieu group such that +cd(g) = cd(h), then there exists an abelian subgroup a of g such that g/a +is isomorphic to h. this study is heading towards the study of an extension of +huppert’s conjecture (2000) for almost simple groups.",4 +"abstract +stochastic gradient descent algorithm has been successfully applied on support vector machines (called pegasos) for many classification problems. +in this paper, stochastic gradient descent algorithm is investigated to twin +support vector machines for classification. compared with pegasos, the +proposed stochastic gradient twin support vector machines (sgtsvm) is +insensitive on stochastic sampling for stochastic gradient descent algorithm. +in theory, we prove the convergence of sgtsvm instead of almost sure convergence of pegasos. for uniformly sampling, the approximation between +sgtsvm and twin support vector machines is also given, while pegasos +only has an opportunity to obtain an approximation of support vector machines. in addition, the nonlinear sgtsvm is derived directly from its linear +case. experimental results on both artificial datasets and large scale problems show the stable performance of sgtsvm with a fast learning speed. +keywords: classification, support vector machines, twin support vector +machines, stochastic gradient descent, large scale problem.",1 +"abstract. suppose an orientation preserving action of a finite group +g on the closed surface σg of genus g > 1 extends over the 3-torus t 3 for +some embedding σg ⊂ t 3 . then |g| ≤ 12(g − 1), and this upper bound +12(g − 1) can be achieved for g = n2 + 1, 3n2 + 1, 2n3 + 1, 4n3 + 1, 8n3 + +1, n ∈ z+ . those surfaces in t 3 realizing the maximum symmetries +can be either unknotted or knotted. similar problems in non-orientable +category is also discussed. 
+connection with minimal surfaces in t 3 is addressed and when the +maximum symmetric surfaces above can be realized by minimal surfaces +is identified.",4 +"abstract. hypergroups are lifted to power semigroups with negation, yielding a method of transferring +results from semigroup theory. this applies to analogous structures such as hypergroups, hyperfields, +and hypermodules, and permits us to transfer the general theory espoused in [19] to the hypertheory.",0 +"abstract—general video game playing (gvgp) aims at +designing an agent that is capable of playing multiple video +games with no human intervention. in 2014, the general video +game ai (gvgai) competition framework was created and +released with the purpose of providing researchers a common +open-source and easy to use platform for testing their ai +methods with potentially infinity of games created using video +game description language (vgdl). the framework has been +expanded into several tracks during the last few years to meet the +demand of different research directions. the agents are required +to either play multiples unknown games with or without access +to game simulations, or to design new game levels or rules. +this survey paper presents the vgdl, the gvgai framework, +existing tracks, and reviews the wide use of gvgai framework +in research, education and competitions five years after its birth. +a future plan of framework improvements is also described.",2 +"abstract diagnosis for tccp using a linear +temporal logic +marco comini, laura titolo +dimi, università degli studi di udine, italy +(e-mail: marco.comini@uniud.it)",6 +"abstract +we prove that every non-trivial valuation on an infinite superrosy field of +positive characteristic has divisible value group and algebraically closed residue +field. in fact, we prove the following more general result. 
let k be a field such +that for every finite extension l of k and for every natural number n > 0 the +index [l∗ : (l∗ )n ] is finite and, if char(k) = p > 0 and f : l → l is given +by f (x) = xp − x, the index [l+ : f [l]] is also finite. then either there is a +non-trivial definable valuation on k, or every non-trivial valuation on k has +divisible value group and, if char(k) > 0, it has algebraically closed residue +field. in the zero characteristic case, we get some partial results of this kind. +we also notice that minimal fields have the property that every non-trivial +valuation has divisible value group and algebraically closed residue field.",0 +"abstract +medium voltage direct-current based integrated power system is projected as +one of the solutions for powering the all-electric ship. it faces significant challenges for accurately energizing advanced loads, especially the pulsed power +load, which can be rail gun, high power radar, and other state of art equipment. energy storage based on supercapacitors is proposed as a technique +for buffering the direct impact of pulsed power load on the power systems. +however, the high magnitude of charging current of the energy storage can +pose as a disturbance to both distribution and generation systems. this paper presents a fast switching device based real time control system that can +achieve a desired balance between maintaining the required power quality +and fast charging the energy storage in required time. test results are shown +to verify the performance of the proposed control algorithm. +keywords: medium voltage direct-current based integrated power system, +pulsed power load, power quality, disturbance metric, real time control +1. introduction +research related to navy shipboard power system raise a critical concern +regarding to the system stability due to diverse loads. similar to microgrids, +navy shipboard power systems do not have a slack bus [1]. 
it can be viewed as +a microgrid always operating in islanding mode. compared with typical terrestrial microgrid, the ratio between the overall load and generation is much +higher [2]. although new avenues such as zonal load architecture [3], high +preprint submitted to international journal of electrical power and energy systemsjanuary 18, 2018",3 +"abstract. let v be a symplectic vector space of dimension 2n. given a partition λ with at most n +parts, there is an associated irreducible representation s[λ] (v ) of sp(v ). this representation admits +a resolution by a natural complex lλ• , which we call the littlewood complex, whose terms are +restrictions of representations of gl(v ). when λ has more than n parts, the representation s[λ] (v ) +is not defined, but the littlewood complex lλ• still makes sense. the purpose of this paper is to +compute its homology. we find that either lλ• is acyclic or it has a unique non-zero homology group, +which forms an irreducible representation of sp(v ). the non-zero homology group, if it exists, can +be computed by a rule reminiscent of that occurring in the borel–weil–bott theorem. this result +can be interpreted as the computation of the “derived specialization” of irreducible representations +of sp(∞), and as such categorifies earlier results of koike–terada on universal character rings. we +prove analogous results for orthogonal and general linear groups. along the way, we will see two +topics from commutative algebra: the minimal free resolutions of determinantal ideals and koszul +homology.",0 +"abstract +we model individual t2dm patient blood glucose level (bgl) by stochastic process with discrete +number of states mainly but not solely governed by medication regimen (e.g. insulin injections). bgl +states change otherwise according to various physiological triggers which render a stochastic, statistically +unknown, yet assumed to be quasi-stationary, nature of the process. 
in order to express incentive for being +in desired healthy bgl we heuristically define a reward function which returns positive values for desirable +bg levels and negative values for undesirable bg levels. the state space consists of sufficient number of +states in order to allow for memoryless assumption. this, in turn, allows to formulate markov decision +process (mdp), with an objective to maximize the total reward, summarized over a long run. the +probability law is found by model-based reinforcement learning (rl) and the optimal insulin treatment +policy is retrieved from mdp solution.",3 +"abstract smoothness conditions and propose +an orthonormal series estimator which attains the optimal rate of convergence. the performance of the estimator depends on the correct specification of a dimension parameter whose +optimal choice relies on smoothness characteristics of both the intensity and the error density. since a priori knowledge of such characteristics is a too strong assumption, we propose +a data-driven choice of the dimension parameter based on model selection and show that +the adaptive estimator either attains the minimax optimal rate or is suboptimal only by a +logarithmic factor.",10 +abstract,9 +"abstract +we consider the problem of estimating a low-rank signal matrix from noisy measurements +under the assumption that the distribution of the data matrix belongs to an exponential family. in this setting, we derive generalized stein’s unbiased risk estimation (sure) formulas +that hold for any spectral estimators which shrink or threshold the singular values of the +data matrix. this leads to new data-driven spectral estimators, whose optimality is discussed using tools from random matrix theory and through numerical experiments. 
under
+the spiked population model and in the asymptotic setting where the dimensions of the data
+matrix tend to infinity, some theoretical properties of our approach are compared
+to recent results on asymptotically optimal shrinking rules for gaussian noise. it also leads
+to new procedures for singular value shrinkage in finite-dimensional matrix denoising for
+gamma-distributed and poisson-distributed measurements.",10
+"abstract. the k-restricted edge-connectivity of a graph g, denoted by
+λk (g), is defined as the minimum size of an edge set whose removal leaves
+exactly two connected components each containing at least k vertices.
+this graph invariant, which can be seen as a generalization of a minimum edge-cut, has been extensively studied from a combinatorial point
+of view. however, very little is known about the complexity of computing λk (g). very recently, in the parameterized complexity community
+the notion of good edge separation of a graph has been defined, which
+happens to be essentially the same as the k-restricted edge-connectivity.
+motivated by the relevance of this invariant from both combinatorial and
+algorithmic points of view, in this article we initiate a systematic study of
+its computational complexity, with special emphasis on its parameterized
+complexity for several choices of the parameters. we provide a number
+of np-hardness and w[1]-hardness results, as well as fpt-algorithms.
+keywords: graph cut; k-restricted edge-connectivity; good edge separation; parameterized complexity; fpt-algorithm; polynomial kernel.",8
+"abstract
+mobile visual search applications are emerging that enable
+users to sense their surroundings with smart phones. however, because of the particular challenges of mobile visual
+search, achieving a high recognition bitrate has become a
+consistent target of previous related works.
in this paper,
+we propose a few-parameter, low-latency, and high-accuracy
+deep hashing approach for constructing binary hash codes for
+mobile visual search. first, we exploit the architecture of
+the mobilenet model, which significantly decreases the latency of deep feature extraction by reducing the number of
+model parameters while maintaining accuracy. second, we
+add a hash-like layer into mobilenet to train the model on labeled mobile visual data. evaluations show that the proposed
+system can exceed state-of-the-art accuracy performance in
+terms of the map. more importantly, the memory consumption is much less than that of other deep learning models. the
+proposed method requires only 13 mb of memory for the neural network and achieves a map of 97.80% on the mobile
+location recognition dataset used for testing.
+index terms— mobile visual search, supervised hashing, binary code, deep learning
+1. introduction
+with the proliferation of mobile devices, it is becoming possible to use mobile perception functionalities (e.g., cameras,
+gps, and wi-fi) to perceive the surrounding environment [1].
+among such techniques, mobile visual search plays a key role
+in mobile localization, mobile media search, and mobile social networking. however, rather than simply porting traditional visual search methods to mobile platforms, for mobile
+visual search, one must face the challenges of a large aural-visual variance of queries, stringent memory and computation
+constraints, network bandwidth limitations, and the desire for
+an instantaneous search experience.
+this work was partially supported by the ccf-tencent open research
+fund (no. agr20160113), the national natural science foundation of
+china (no. 61632008), and the fundamental research funds for the central
+universities (no. 2016rcgd32).",1
+"abstract
+we review boltzmann machines extended for time-series.
these models often have recurrent
+structure, and back propagation through time (bptt) is used to learn their parameters. the per-step computational complexity of bptt in online learning, however, grows linearly with respect
+to the length of the preceding time-series (i.e., the learning rule is not local in time), which limits the
+applicability of bptt in online learning. we then review dynamic boltzmann machines (dybms),
+whose learning rule is local in time. dybm’s learning rule relates to spike-timing dependent
+plasticity (stdp), which has been postulated and experimentally confirmed for biological neural
+networks.",9
+"abstract
+this paper presents two realizations of linear quantum systems for covariance assignment corresponding to pure gaussian states. the
+first one is called a cascade realization; given any covariance matrix corresponding to a pure gaussian state, we can construct a cascaded
+quantum system generating that state. the second one is called a locally dissipative realization; given a covariance matrix corresponding
+to a pure gaussian state, if it satisfies certain conditions, we can construct a linear quantum system that has only local interactions with
+its environment and achieves the assigned covariance matrix. both realizations are illustrated by examples from quantum optics.
+key words: linear quantum system, cascade realization, locally dissipative realization, covariance assignment, pure gaussian state.",3
+"abstract
+this paper considers the synchronization problem for networks of coupled nonlinear dynamical systems under switching communication topologies. two types of nonlinear agent
+dynamics are considered. the first one is non-expansive dynamics (stable dynamics with
+a convex lyapunov function ϕ(·)) and the second one is dynamics that satisfies a global
+lipschitz condition.
for the non-expansive case, we show that various forms of joint connectivity for communication graphs are sufficient for networks to achieve global asymptotic +ϕ-synchronization. we also show that ϕ-synchronization leads to state synchronization provided that certain additional conditions are satisfied. for the globally lipschitz case, unlike +the non-expansive case, joint connectivity alone is not sufficient for achieving synchronization. a sufficient condition for reaching global exponential synchronization is established in +terms of the relationship between the global lipschitz constant and the network parameters. +we also extend the results to leader-follower networks.",3 +"abstracts +franck dernoncourt∗ +mit +francky@mit.edu",9 +"abstract. the subject of this work is quantum predicative programming — the study of developing of programs intended for execution on +a quantum computer. we look at programming in the context of formal +methods of program development, or programming methodology. our +work is based on probabilistic predicative programming, a recent generalisation of the well-established predicative programming. it supports +the style of program development in which each programming step is +proven correct as it is made. we inherit the advantages of the theory, +such as its generality, simple treatment of recursive programs, time and +space complexity, and communication. our theory of quantum programming provides tools to write both classical and quantum specifications, +develop quantum programs that implement these specifications, and reason about their comparative time and space complexity all in the same +framework.",6 +"abstract we report on an extensive study of the current benefits and limitations of deep learning approaches +to robot vision and introduce a novel dataset used for +our investigation. 
to avoid the biases in currently available datasets, we consider a human-robot interaction
+setting to design a data-acquisition protocol for visual
+object recognition on the icub humanoid robot. considering the performance of off-the-shelf models trained
+on off-line large-scale image retrieval datasets, we show
+the necessity for knowledge transfer. indeed, we analyze different ways in which this last step can be done,
+and identify the major bottlenecks in robotics scenarios.
+by studying both object categorization and identification tasks, we highlight the key differences between object recognition in robotics and in image retrieval tasks,
+for which the considered deep learning approaches have
+been originally designed. in a nutshell, our results confirm, also in the considered setting, the remarkable improvements yielded by deep learning, while pointing to
+specific open challenges that need to be addressed for
+seamless deployment in robotics.",1
+"abstract—a source submits status updates to a network for
+delivery to a destination monitor. updates follow a route through
+a series of network nodes. each node is a last-come-first-served
+queue supporting preemption in service. we characterize the
+average age of information at the input and output of each node
+in the route induced by the updates passing through. for poisson
+arrivals to a line network of preemptive memoryless servers, we
+show that average age accumulates through successive network
+nodes.",7
+"abstract—we use decision trees to build a helpdesk agent
+reference network to facilitate the on-the-job advising of junior
+or less experienced staff on how to better address
+telecommunication customer fault reports. such reports generate
+field measurements and remote measurements which, when
+coupled with location data and client attributes, and fused with
+organization-level statistics, can produce models of how support
+should be provided.
beyond decision support, these models can +help identify staff who can act as advisors, based on the quality, +consistency and predictability of dealing with complex +troubleshooting reports. advisor staff models are then used to +guide less experienced staff in their decision making; thus, we +advocate the deployment of a simple mechanism which exploits +the availability of staff with a sound track record at the helpdesk +to act as dormant tutors. +index terms— customer relationship management; decision +trees; knowledge flow graph",2 +"abstract +in multiagent systems, we often have a set of agents each of which have a preference ordering +over a set of items and one would like to know these preference orderings for various tasks, for +example, data analysis, preference aggregation, voting etc. however, we often have a large +number of items which makes it impractical to ask the agents for their complete preference +ordering. in such scenarios, we usually elicit these agents’ preferences by asking (a hopefully +small number of) comparison queries — asking an agent to compare two items. prior works on +preference elicitation focus on unrestricted domain and the domain of single peaked preferences +and show that the preferences in single peaked domain can be elicited by much less number of +queries compared to unrestricted domain. we extend this line of research and study preference +elicitation for single peaked preferences on trees which is a strict superset of the domain of single +peaked preferences. we show that the query complexity crucially depends on the number of +leaves, the path cover number, and the distance from path of the underlying single peaked tree, +whereas the other natural parameters like maximum degree, diameter, pathwidth do not play +any direct role in determining query complexity. 
we then investigate the query complexity for
+finding a weak condorcet winner for preferences single peaked on a tree and show that this
+task has much lower query complexity than preference elicitation. here again we observe that the
+number of leaves in the underlying single peaked tree and the path cover number of the tree
+influence the query complexity of the problem.",8
+"abstract. scheduling theory is an old and well-established area in combinatorial optimization, whereas
+the much younger area of parameterized complexity has only recently gained the attention of the community. our aim is to bring these two areas closer together by studying the parameterized complexity
+of a class of single-machine two-agent scheduling problems. our analysis focuses on the case where the
+number of jobs belonging to the second agent is considerably smaller than the number of jobs belonging
+to the first agent, and thus can be considered as a fixed parameter k. we study a variety of combinations
+of scheduling criteria for the two agents, and for each such combination we pinpoint its parameterized
+complexity with respect to the parameter k. the scheduling criteria that we analyze include the total
+weighted completion time, the total weighted number of tardy jobs, and the total weighted number of
+just-in-time jobs. our analysis draws a borderline between tractable and intractable variants of these
+problems.
+⋆",8
+"abstract
+we study the capacitated k-median problem for which existing constant-factor approximation algorithms are all pseudo-approximations that violate either the capacities or the upper
+bound k on the number of open facilities. using the natural lp relaxation for the problem, one
+can only hope to get the violation factor down to 2. li [soda’16] introduced a novel lp to go
+beyond the limit of 2 and gave a constant-factor approximation algorithm that opens (1 + ε)k
+facilities.
we use the configuration lp of li [soda’16] to give a constant-factor approximation for
+the capacitated k-median problem in a seemingly harder configuration: we violate only the
+capacities by 1 + ε. this result settles the problem as far as pseudo-approximation algorithms
+are concerned.",8
+"abstract. a locally compact groupoid is said to have the weak containment property if its
+full c∗ -algebra coincides with its reduced one. although it is now known that this property is
+strictly weaker than amenability, we show that the two properties are the same under a mild
+exactness assumption. then we apply our result to get information about the corresponding
+weak containment property for some semigroups.",4
+"abstract: in linear regression with fixed design, we propose two procedures that aggregate a data-driven collection of supports. the collection
+is a subset of the 2^p possible supports and both its cardinality and its elements can depend on the data. the procedures satisfy oracle inequalities
+with no assumption on the design matrix. then we use these procedures
+to aggregate the supports that appear on the regularization path of the
+lasso in order to construct an estimator that mimics the best lasso
+estimator. if the restricted eigenvalue condition on the design matrix is
+satisfied, then this estimator achieves optimal prediction bounds. finally,
+we discuss the computational cost of these procedures.",10
+"abstract— the paper considers the problem of cooperative
+estimation for a linear uncertain plant observed by a network of
+communicating sensors. we take a novel approach by treating
+the filtering problem from the viewpoint of local sensors while
+the network interconnections are accounted for via an uncertain
+signals modelling of estimation performance of other nodes.
that is, the information communicated between the nodes is
+treated as the true plant information subject to perturbations,
+and each node is endowed with certain beliefs about these
+perturbations during the filter design. the proposed distributed
+filter achieves a suboptimal h∞ consensus performance. furthermore, local performance of each estimator is also assessed
+given additional constraints on the performance of the other
+nodes. these conditions are shown to be useful in tuning the
+desired estimation performance of the sensor network.",3
+"abstracting from these details, the fft and ifft take
+up a significant amount of compute resources, which we address in section 5.
+table 5: cufft convolution performance breakdown (k40m, ms) for layers l1 through l5, each with fprop, bprop, and accgrad rows.",9
+"abstraction for networks of control systems: a
+dissipativity approach",3
+"abstract. completely random measures (crms) and their normalizations
+are a rich source of bayesian nonparametric priors. examples include the beta,
+gamma, and dirichlet processes. in this paper we detail two major classes
+of sequential crm representations—series representations and superposition
+representations—within which we organize both novel and existing sequential
+representations that can be used for simulation and posterior inference. these
+two classes and their constituent representations subsume existing ones that
+have previously been developed in an ad hoc manner for specific processes.
+since a complete infinite-dimensional crm cannot be used explicitly for computation, sequential representations are often truncated for tractability. we
+provide truncation error analyses for each type of sequential representation,
+as well as their normalized versions, thereby generalizing and improving upon
+existing truncation error bounds in the literature.
we analyze the computational complexity of the sequential representations, which in conjunction with +our error bounds allows us to directly compare representations and discuss +their relative efficiency. we include numerous applications of our theoretical +results to commonly-used (normalized) crms, demonstrating that our results +enable a straightforward representation and analysis of crms that has not +previously been available in a bayesian nonparametric context.",10 +"abstract. this note presents a discussion of the algebraic and combinatorial aspects +of the theory of pure o-sequences. various instances where pure o-sequences appear are +described. several open problems that deserve further investigation are also presented.",0 +"abstract +by taking a variety of realistic hardware imperfections into consideration, we propose an optimal +power allocation (opa) strategy to maximize the instantaneous secrecy rate of a cooperative wireless +network comprised of a source, a destination and an untrusted amplify-and-forward (af) relay. we +assume that either the source or the destination is equipped with a large-scale multiple antennas +(lsma) system, while the rest are equipped with a single antenna. to prevent the untrusted relay from +intercepting the source message, the destination sends an intended jamming noise to the relay, which is +referred to as destination-based cooperative jamming (dbcj). given this system model, novel closedform expressions are presented in the high signal-to-noise ratio (snr) regime for the ergodic secrecy +rate (esr) and the secrecy outage probability (sop). we further improve the secrecy performance +of the system by optimizing the associated hardware design. the results reveal that by beneficially +distributing the tolerable hardware imperfections across the transmission and reception radio-frequency +(rf) front ends of each node, the system’s secrecy rate may be improved. 
the engineering insight is +that equally sharing the total imperfections at the relay between the transmitter and the receiver provides +the best secrecy performance. numerical results illustrate that the proposed opa together with the most +appropriate hardware design significantly increases the secrecy rate.",7 +"abstract +feature learning with deep models has achieved impressive results for both data representation and classification +for various vision tasks. deep feature learning, however, +typically requires a large amount of training data, which +may not be feasible for some application domains. transfer +learning can be one of the approaches to alleviate this problem by transferring data from data-rich source domain to +data-scarce target domain. existing transfer learning methods typically perform one-shot transfer learning and often +ignore the specific properties that the transferred data must +satisfy. to address these issues, we introduce a constrained +deep transfer feature learning method to perform simultaneous transfer learning and feature learning by performing +transfer learning in a progressively improving feature space +iteratively in order to better narrow the gap between the target domain and the source domain for effective transfer of +the data from source domain to target domain. furthermore, we propose to exploit the target domain knowledge +and incorporate such prior knowledge as constraint during +transfer learning to ensure that the transferred data satisfies +certain properties of the target domain. +to demonstrate the effectiveness of the proposed constrained deep transfer feature learning method, we apply +it to thermal feature learning for eye detection by transferring from the visible domain. we also applied the proposed +method for cross-view facial expression recognition as a +second application. 
the experimental results demonstrate +the effectiveness of the proposed method for both applications.",1 +"abstract—we consider a finite-horizon linear-quadratic optimal control problem where only a limited number of control +messages are allowed for sending from the controller to the +actuator. to restrict the number of control actions computed +and transmitted by the controller, we employ a threshold-based +event-triggering mechanism that decides whether or not a control +message needs to be calculated and delivered. due to the nature of +threshold-based event-triggering algorithms, finding the optimal +control sequence requires minimizing a quadratic cost function +over a non-convex domain. in this paper, we firstly provide +an exact solution to the non-convex problem mentioned above +by solving an exponential number of quadratic programs. to +reduce computational complexity, we, then, propose two efficient +heuristic algorithms based on greedy search and the alternating +direction method of multipliers (admm) method. later, we +consider a receding horizon control strategy for linear systems +controlled by event-triggered controllers, and we also provide +a complete stability analysis of receding horizon control that +uses finite horizon optimization in the proposed class. numerical +examples testify to the viability of the presented design technique. +index terms — optimal control; linear systems; eventtriggered control; receding horizon control",3 +"abstract +advancement in technology has generated abundant high-dimensional data that allows integration of multiple relevant studies. due to their huge computational advantage, variable screening methods based on marginal correlation have become promising +alternatives to the popular regularization methods for variable selection. however, all +these screening methods are limited to single study so far. 
in this paper, we consider +a general framework for variable screening with multiple related studies, and further +propose a novel two-step screening procedure using a self-normalized estimator for highdimensional regression analysis in this framework. compared to the one-step procedure +and rank-based sure independence screening (sis) procedure, our procedure greatly reduces false negative errors while keeping a low false positive rate. theoretically, we +show that our procedure possesses the sure screening property with weaker assumptions on signal strengths and allows the number of features to grow at an exponential +rate of the sample size. in addition, we relax the commonly used normality assumption +and allow sub-gaussian distributions. simulations and a real transcriptomic application illustrate the advantage of our method as compared to the rank-based sis method. +key words and phrases: multiple studies, partial faithfulness, self-normalized estimator, sure screening property, variable selection",10 +"abstract +in the paper, a parallel tabu search algorithm for the resource constrained project scheduling problem is proposed. to deal with this np-hard combinatorial problem many optimizations have been +performed. for example, a resource evaluation algorithm is selected by a heuristic and an effective +tabu list was designed. in addition to that, a capacity-indexed resource evaluation algorithm was +proposed and the gpu (graphics processing unit) version uses a homogeneous model to reduce the +required communication bandwidth. according to the experiments, the gpu version outperforms the +optimized parallel cpu version with respect to the computational time and the quality of solutions. +in comparison with other existing heuristics, the proposed solution often gives better quality solutions. 
cite as: libor bukata, premysl sucha, zdenek hanzalek, solving the resource constrained project
+scheduling problem using the parallel tabu search designed for the cuda platform, journal of parallel
+and distributed computing, volume 77, march 2015, pages 58-68, issn 0743-7315, http://dx.doi.org/10.1016/j.jpdc.2014.11.005.
+source code: https://github.com/ctu-iig/rcpspcpu, https://github.com/ctu-iig/rcpspgpu
+keywords: resource constrained project scheduling problem, parallel tabu search, cuda,
+homogeneous model, gpu
+1. introduction
+the resource constrained project scheduling
+problem (rcpsp), which has a wide range of applications in logistics, manufacturing and project
+management [1], is a universal and well-known
+problem in the operations research domain. the
+problem can be briefly described using a set of
+∗",2
+"abstract—this paper is concerned with optimization of
+distributed broadband wireless communication (bwc) systems. bwc systems contain a distributed antenna system (das)
+connected to a base station with optical fiber. distributed bwc
+systems have been proposed as a solution to the power constraint
+problem in traditional cellular networks. so far, the research on
+bwc systems has advanced on two separate tracks: design of
+the system to meet the quality of service (qos) requirements and
+optimization of the location of the das. in this paper, we consider a
+combined optimization of bwc systems. we consider uplink
+communications in distributed bwc systems with multiple levels
+of priority traffic with arrivals and departures forming renewal
+processes. we develop an analysis that determines the packet delay
+violation probability for each priority level as a function of the
+outage probability of the das through the application of results
+from renewal theory. then, we determine the optimal locations of
+the antennas that minimize the antenna outage probability.
we
+also study the trade-off between the packet delay violation
+probability and the packet loss probability. this work will be helpful
+in the design of distributed bwc systems.
+index terms— queuing delay, multiple levels of priority
+traffic, distributed antenna system (das), outage probability,
+antenna placement.",1
+"abstract
+like classical block codes, a locally repairable code also obeys the singleton-type bound (we call
+a locally repairable code optimal if it achieves the singleton-type bound). in the breakthrough work of
+[14], several classes of optimal locally repairable codes were constructed via subcodes of reed-solomon
+codes. thus, the lengths of the codes given in [14] are upper bounded by the code alphabet size q.
+recently, it was proved through an extension of the construction in [14] that the length of q-ary optimal locally
+repairable codes can be q + 1 in [7]. surprisingly, [2] presented a few examples of q-ary optimal locally
+repairable codes of small distance and locality with code length achieving roughly q^2. very recently,
+it was further shown in [8] that there exist q-ary optimal locally repairable codes with length bigger
+than q + 1 and distance proportional to n. thus, it becomes an interesting and challenging problem to
+construct new families of q-ary optimal locally repairable codes of length bigger than q + 1.
+in this paper, we construct a class of optimal locally repairable codes of distance 3 and 4 with unbounded length (i.e., the length of the codes is independent of the code alphabet size). our technique is
+through cyclic codes with particular generator and parity-check polynomials that are carefully chosen.",7
+"abstract
+
+let b = b (x) , x ∈ s^2 be the fractional brownian motion indexed
+by the unit sphere s^2 with index 0 < h ≤ 1/2, introduced by istas [12].
+we establish optimal upper and lower bounds for its angular power spectrum {dℓ , ℓ = 0, 1, 2, . .
.}, and then exploit its high-frequency behavior to +establish the property of its strong local nondeterminism of b.",10 +"abstract)∗ +davide corona†",5 +"abstract +we give effective proofs of residual finiteness and conjugacy separability for finitely generated +nilpotent groups. in particular, we give precise asymptotic bounds for a function introduced +by bou-rabee that measures how large the quotients that are needed to separate non-identity +elements of bounded length from the identity which improves the work of bou-rabee. similarly, we give polynomial upper and lower bounds for an analogous function introduced by +lawton, louder, and mcreynolds that measures how large the quotients that are needed to +separate pairs of distinct conjugacy classes of bounded word length using work of blackburn +and mal’tsev.",4 +"abstract—in a device-to-device (d2d) underlaid massive +mimo system, d2d transmitters reuse the uplink spectrum +of cellular users (cus), leading to cochannel interference. to +decrease pilot overhead, we assume pilot reuse (pr) among +d2d pairs. we first derive the minimum-mean-square-error +(mmse) estimation of all channels and give a lower bound +on the ergodic achievable rate of both cellular and d2d links. +to mitigate pilot contamination caused by pr, we then propose +a pilot scheduling and pilot power control algorithm based on +the criterion of minimizing the sum mean-square-error (mse) +of channel estimation of d2d links. we show that, with an +appropriate pr ratio and a well designed pilot scheduling scheme, +each d2d transmitter could transmit its pilot with maximum +power. in addition, we also maximize the sum rate of all d2d +links while guaranteeing the quality of service (qos) of cus, and +develop an iterative algorithm to obtain a suboptimal solution. 
+simulation results show that the effect of pilot contamination can +be greatly decreased by the proposed pilot scheduling algorithm, +and the pr scheme provides significant performance gains over +the conventional orthogonal training scheme in terms of system +spectral efficiency.",7 +"abstract +nitsche’s method is a popular approach to implement dirichlet-type boundary conditions in situations where +a strong imposition is either inconvenient or simply not feasible. the method is widely applied in the context +of unfitted finite element methods. from the classical (symmetric) nitsche’s method it is well-known that +the stabilization parameter in the method has to be chosen sufficiently large to obtain unique solvability +of discrete systems. in this short note we discuss an often used strategy to set the stabilization parameter +and describe a possible problem that can arise from this. we show that in specific situations error bounds +can deteriorate and give examples of computations where nitsche’s method yields large and even diverging +discretization errors. +keywords: nitsche’s method, unfitted/immersed finite element methods, penalty/stabilization parameter, +accuracy, stability, error analysis",5 +"abstract +we investigate the performance of the finite volume method in solving viscoplastic flows. the creeping +square lid-driven cavity flow of a bingham plastic is chosen as the test case and the constitutive equation +is regularised as proposed by papanastasiou [j. rheology 31 (1987) 385-404]. it is shown that the +convergence rate of the standard simple pressure-correction algorithm, which is used to solve the +algebraic equation system that is produced by the finite volume discretisation, severely deteriorates as +the bingham number increases, with a corresponding increase in the non-linearity of the equations. 
it +is shown that using the simple algorithm in a multigrid context dramatically improves convergence, +although the multigrid convergence rates are much worse than for newtonian flows. the numerical +results obtained for bingham numbers as high as 1000 compare favorably with reported results of other +methods. +keywords: bingham plastic, papanastasiou regularisation, lid-driven cavity, finite volume method, +simple, multigrid +this is the accepted version of the article published in: journal of non-newtonian fluid mechanics +195 (2013) 19–31, doi:10.1016/j.jnnfm.2012.12.008 +c 2016. this manuscript version is made available under the cc-by-nc-nd 4.0 license http: +//creativecommons.org/licenses/by-nc-nd/4.0/ +1. introduction +viscoplastic flows constitute an important branch of non-newtonian fluid mechanics, as many materials of industrial, geophysical, and biological importance are known to exhibit yield stress. in general, +yield-stress fluids are suspensions of particles or macromolecules, such as pastes, gels, foams, drilling +fluids, food products, and nanocomposites. a comprehensive review of viscoplasticity has been carried +out by barnes [1]. such materials behave as (elastic or inelastic) solids, below a certain critical shear +stress level, i.e. the yield stress, and as liquids otherwise. the flow field is thus divided into unyielded +(rigid) and yielded (fluid) regions. +∗",5 +"abstract +for computer vision applications, prior works have shown the efficacy of reducing +numeric precision of model parameters (network weights) in deep neural networks. +activation maps, however, occupy a large memory footprint during both the training and inference step when using mini-batches of inputs. one way to reduce this +large memory footprint is to reduce the precision of activations. however, past +works have shown that reducing the precision of activations hurts model accuracy. 
+we study schemes to train networks from scratch using reduced-precision activations without hurting accuracy. we reduce the precision of activation maps (along
+with model parameters) and increase the number of filter maps in a layer, and find
+that this scheme matches or surpasses the accuracy of the baseline full-precision
+network. as a result, one can significantly improve the execution efficiency (e.g.
+reduce dynamic memory footprint, memory bandwidth and computational energy)
+and speed up the training and inference process with appropriate hardware support. we call our scheme wrpn - wide reduced-precision networks. we report
+results and show that the wrpn scheme is better than previously reported accuracies
+on the ilsvrc-12 dataset while being computationally less expensive compared to
+previously reported reduced-precision networks.",9
+"abstract. let g be an abelian group and s be a g-graded noetherian algebra over a
+commutative ring a ⊆ s_0. let i_1, . . . , i_s be g-homogeneous ideals in s, and let m be a
+finitely generated g-graded s-module. we show that the shape of nonzero g-graded betti
+numbers of m i_1^{t_1} · · · i_s^{t_s} exhibits an eventual linear behavior as the t_i's get large.",0
+"abstract. it is well-known that the factorization properties of a domain are
+reflected in the structure of its group of divisibility. the main theme of this
+paper is to introduce a topological/graph-theoretic point of view to the current understanding of factorization in integral domains. we also show that
+connectedness properties in the graph and topological space give rise to a
+generalization of atomicity.",0
+"abstract
+the large-system performance of maximum-a-posterior estimation is studied considering a general distortion
+function when the observation vector is received through a linear system with additive white gaussian noise. the
+analysis considers the system matrix to be chosen from the large class of rotationally invariant random matrices.
we +take a statistical mechanical approach by introducing a spin glass corresponding to the estimator, and employing the +replica method for the large-system analysis. in contrast to earlier replica based studies, our analysis evaluates the +general replica ansatz of the corresponding spin glass and determines the asymptotic distortion of the estimator for +any structure of the replica correlation matrix. consequently, the replica symmetric as well as the replica symmetry +breaking ansatz with b steps of breaking is deduced from the given general replica ansatz. the generality of our +distortion function lets us derive a more general form of the maximum-a-posterior decoupling principle. based on +the general replica ansatz, we show that for any structure of the replica correlation matrix, the vector-valued system +decouples into a bank of equivalent decoupled linear systems followed by maximum-a-posterior estimators. the +structure of the decoupled linear system is further studied under both the replica symmetry and the replica symmetry +breaking assumptions. for b steps of symmetry breaking, the decoupled system is found to be an additive system +with a noise term given as the sum of an independent gaussian random variable with b correlated impairment +terms. the general decoupling property of the maximum-a-posterior estimator leads to the idea of a replica simulator +which represents the replica ansatz through the state evolution of a transition system described by its corresponding +decoupled system. as an application of our study, we investigate large compressive sensing systems by considering +the ℓp norm minimization recovery schemes. our numerical investigations show that the replica symmetric ansatz for +ℓ0 norm recovery fails to give an accurate approximation of the mean square error as the compression rate grows, +and therefore, the replica symmetry breaking ansätze are needed in order to assess the performance precisely. 
+index terms
+maximum-a-posterior estimation, linear vector channel, decoupling principle, equivalent single-user system, compressive sensing, zero norm, replica method, statistical physics, replica symmetry breaking, replica simulator
+the results of this manuscript were presented in parts at 2016 ieee information theory workshop (itw) [78] and 2017 ieee information
+theory and applications workshop (ita) [79].
+this work was supported by the german research foundation, deutsche forschungsgemeinschaft (dfg), under grant no. mu 3735/2-1.
+ali bereyhi and ralf r. müller are with the institute for digital communications (idc), friedrich alexander university of erlangen-nürnberg
+(fau), konrad-zuse-straße 5, 91052, erlangen, bavaria, germany (e-mails: ali.bereyhi@fau.de, ralf.r.mueller@fau.de).
+hermann schulz-baldes is with the department of mathematics, fau, cauerstraße 11, 91058, erlangen, bavaria, germany (e-mail: schuba@mi.uni-erlangen.de).",7
+"abstract
+hi, robby, can you get my cup
+from the cupboard?",1
+"abstract. we prove that the autonomous norm on the group of
+compactly supported hamiltonian diffeomorphisms of the standard
+r^{2n} is bounded.",4
+"abstract
+to help mitigate road congestion caused by the unrelenting growth of traffic demand, many transit
+authorities have implemented managed lane policies. managed lanes typically run parallel to a freeway’s
+standard, general-purpose (gp) lanes, but are restricted to certain types of vehicles. it was originally
+thought that managed lanes would improve the use of existing infrastructure through incentivization of
+demand-management behaviors like carpooling, but implementations have often been characterized by
+unpredicted phenomena that are often detrimental to system performance. development of traffic models
+that can capture these sorts of behaviors is a key step for helping managed lanes deliver on their promised
+gains.
+towards this goal, this paper presents several macroscopic traffic modeling tools we have used for
+study of freeways equipped with managed lanes, or “managed lane-freeway networks.” the proposed
+framework is based on the widely-used first-order kinematic wave theory. in this model, the gp and
+the managed lanes are modeled as parallel links connected by nodes, where certain types of traffic may
+switch between gp and managed lane links. two types of managed lane configuration are considered:
+full-access, where vehicles can switch between the gp and the managed lanes anywhere; and separated,
+where such switching is allowed only at certain locations called gates.
+we incorporate two phenomena into our model that are particular to managed lane-freeway networks:
+the inertia effect and the friction effect. the inertia effect reflects drivers’ inclination to stay in their lane
+as long as possible and switch only if this would obviously improve their travel condition. the friction
+effect reflects the empirically-observed driver fear of moving fast in a managed lane while traffic in the
+adjacent gp links moves slowly due to congestion.
+calibration of models of large road networks is difficult, as the dynamics depend on many parameters whose number grows with the network’s size. we present an iterative learning-based approach
+to calibrating our model’s physical and driver-behavioral parameters. finally, we validate our model
+and calibration methodology with case studies of simulations of two managed lane-equipped california
+freeways.",3
+"abstract
+a compiler working with abstract functions modelled directly in the theorem
+prover’s logic is defined and proven sound.
then, this compiler is refined
+to a concrete version that returns a target-language expression.",6
+abstract,6
+"abstract—this paper proposes xml-defined network policies
+(xdnp), a new high-level language based on xml notation,
+to describe network control rules in software defined network
+environments. we rely on existing openflow controllers, specifically floodlight, but the novelty of this project is to separate
+complicated language- and framework-specific apis from policy
+descriptions. this separation makes it possible to extend the
+current work as a northbound higher-level abstraction that
+can support a wide range of controllers that are based on
+different programming languages. by this approach, we believe
+that network administrators can develop and deploy network
+control policies more easily and quickly.
+index terms—software defined networks; openflow; floodlight; sdn compiler; sdn programming languages; sdn abstraction.",6
+"abstract
+in this paper we present linear time approximation schemes for several generalized matching problems on nonbipartite graphs. our results include o_ε(m)-time algorithms for (1 − ε)-maximum weight f-factor and (1 + ε)-approximate minimum weight f-edge cover. as a byproduct, we
+also obtain direct algorithms for the exact cardinality versions of these problems running
+in o(m √f(v)) time.
+the technical contributions of this work include an efficient method for maintaining relaxed complementary slackness in generalized matching problems and approximation-preserving
+reductions between the f-factor and f-edge cover problems.",8
+"abstract. we call an ideal in a polynomial ring robust if it can be minimally generated
+by a universal gröbner basis. in this paper we show that robust toric ideals generated
+by quadrics are essentially determinantal.
we then discuss two possible generalizations to +higher degree, providing a tight classification for determinantal ideals, and a counterexample +to a natural extension for lawrence ideals. we close with a discussion of robustness of higher +betti numbers.",0 +"abstract +the major challenges of automatic track counting are distinguishing tracks and +material defects, identifying small tracks and defects of similar size, and detecting overlapping tracks. here we address the latter issue using wusem, +an algorithm which combines the watershed transform, morphological erosions +and labeling to separate regions in photomicrographs. wusem shows reliable +results when used in photomicrographs presenting almost isotropic objects. we +tested this method in two datasets of diallyl phthalate (dap) photomicrographs +and compared the results when counting manually and using the classic watershed. the mean automatic/manual efficiency ratio when using wusem in the +test datasets is 0.97 ± 0.11. +keywords: automatic counting, diallyl phthalate, digital image processing, +fission track dating",1 +"abstract—this paper considers the massive connectivity application in which a large number of potential devices communicate +with a base-station (bs) in a sporadic fashion. the detection of +device activity pattern together with the estimation of the channel +are central problems in such a scenario. due to the large number +of potential devices in the network, the devices need to be assigned +non-orthogonal signature sequences. 
the main objective of this +paper is to show that by using random signature sequences and +by exploiting sparsity in the user activity pattern, the joint user +detection and channel estimation problem can be formulated as +a compressed sensing single measurement vector (smv) problem +or multiple measurement vector (mmv) problem, depending on +whether the bs has a single antenna or multiple antennas, and be +efficiently solved using an approximate message passing (amp) +algorithm. this paper proposes an amp algorithm design that +exploits the statistics of the wireless channel and provides an +analytical characterization of the probabilities of false alarm and +missed detection by using the state evolution. we consider two +cases depending on whether the large-scale component of the +channel fading is known at the bs and design the minimum +mean squared error (mmse) denoiser for amp according to the +channel statistics. simulation results demonstrate the substantial +advantage of exploiting the statistical channel information in +amp design; however, knowing the large-scale fading component +does not offer tangible benefits. for the multiple-antenna case, +we employ two different amp algorithms, namely the amp with +vector denoiser and the parallel amp-mmv, and quantify the +benefit of deploying multiple antennas at the bs. +index terms—device activity detection, channel estimation, +approximate message passing, compressed sensing, internet of +things (iot), machine-type communications (mtc)",7 +"abstract. we give a new, simple distributed algorithm for graph colouring in paths and +cycles. our algorithm is fast and self-contained, it does not need any globally consistent +orientation, and it reduces the number of colours from 10100 to 3 in three iterations.",8 +"abstract. this paper provides an induction rule that can be used to prove properties of +data structures whose types are inductive, i.e., are carriers of initial algebras of functors. 
+our results are semantic in nature and are inspired by hermida and jacobs’ elegant algebraic formulation of induction for polynomial data types. our contribution is to derive, +under slightly different assumptions, a sound induction rule that is generic over all inductive types, polynomial or not. our induction rule is generic over the kinds of properties to +be proved as well: like hermida and jacobs, we work in a general fibrational setting and +so can accommodate very general notions of properties on inductive types rather than just +those of a particular syntactic form. we establish the soundness of our generic induction +rule by reducing induction to iteration. we then show how our generic induction rule can +be instantiated to give induction rules for the data types of rose trees, finite hereditary +sets, and hyperfunctions. the first of these lies outside the scope of hermida and jacobs’ +work because it is not polynomial, and as far as we are aware, no induction rules have +been known to exist for the second and third in a general fibrational framework. our +instantiation for hyperfunctions underscores the value of working in the general fibrational +setting since this data type cannot be interpreted as a set.",6 +"abstract. step-indexed semantic interpretations of types were proposed as an alternative +to purely syntactic proofs of type safety using subject reduction. the types are interpreted +as sets of values indexed by the number of computation steps for which these values are +guaranteed to behave like proper elements of the type. building on work by ahmed, appel +and others, we introduce a step-indexed semantics for the imperative object calculus of +abadi and cardelli. providing a semantic account of this calculus using more ‘traditional’, +domain-theoretic approaches has proved challenging due to the combination of dynamically +allocated objects, higher-order store, and an expressive type system. 
here we show that,
+using step-indexing, one can interpret a rich type discipline with object types, subtyping,
+recursive and bounded quantified types in the presence of state.",6
+"abstract
+it was shown recently that the k l1-norm principal components (l1-pcs) of a real-valued data matrix x ∈
+r^{d×n} (n data samples of d dimensions) can be exactly calculated with cost o(2^{nk}) or, when advantageous,
+o(n^{dk−k+1}) where d = rank(x), k < d [1], [2]. in applications where x is large (e.g., “big” data of large n
+and/or “heavy” data of large d), these costs are prohibitive. in this work, we present a novel suboptimal algorithm
+for the calculation of the k < d l1-pcs of x of cost o(nd min{n, d} + n^2(k^4 + dk^2) + dnk^3), which
+is comparable to that of standard (l2-norm) pc analysis. our theoretical and experimental studies show that the
+proposed algorithm calculates the exact optimal l1-pcs with high frequency and achieves higher value in the l1-pc
+optimization metric than any known alternative algorithm of comparable computational cost. the superiority of the
+calculated l1-pcs over standard l2-pcs (singular vectors) in characterizing potentially faulty data/measurements is
+demonstrated with experiments on data dimensionality reduction and disease diagnosis from genomic data.",8
+"abstract—we present a structural clustering algorithm for
+large-scale datasets of small labeled graphs, utilizing a frequent
+subgraph sampling strategy. a set of representatives provides
+an intuitive description of each cluster, supports the clustering process, and helps to interpret the clustering results. the
+projection-based nature of the clustering approach allows us to
+bypass dimensionality and feature extraction problems that arise
+in the context of graph datasets reduced to pairwise distances
+or feature vectors. while achieving high quality and (human)
+interpretable clusterings, the runtime of the algorithm only grows
+linearly with the number of graphs.
furthermore, the approach is +easy to parallelize and therefore suitable for very large datasets. +our extensive experimental evaluation on synthetic and real +world datasets demonstrates the superiority of our approach over +existing structural and subspace clustering algorithms, both, from +a runtime and quality point of view.",8 +abstract,6 +"abstract +as nuclear power expands, technical, economic, political, and environmental analyses of nuclear fuel cycles +by simulators increase in importance. to date, however, current tools are often fleet-based rather than +discrete and restrictively licensed rather than open source. each of these choices presents a challenge to +modeling fidelity, generality, efficiency, robustness, and scientific transparency. the cyclus nuclear fuel +cycle simulator framework and its modeling ecosystem incorporate modern insights from simulation science +and software architecture to solve these problems so that challenges in nuclear fuel cycle analysis can be +better addressed. a summary of the cyclus fuel cycle simulator framework and its modeling ecosystem are +presented. additionally, the implementation of each is discussed in the context of motivating challenges in +nuclear fuel cycle simulation. finally, the current capabilities of cyclus are demonstrated for both open +and closed fuel cycles. +keywords: nuclear fuel cycle, simulation, agent based modeling, nuclear engineering, object orientation, +systems analysis",5 +"abstract +this paper describes autonomous racing of rc race cars based on mathematical optimization. using a dynamical +model of the vehicle, control inputs are computed by receding horizon based controllers, where the objective is to +maximize progress on the track subject to the requirement of staying on the track and avoiding opponents. two +different control formulations are presented. 
the first controller employs a two-level structure, consisting of a path +planner and a nonlinear model predictive controller (nmpc) for tracking. the second controller combines both +tasks in one nonlinear optimization problem (nlp) following the ideas of contouring control. linear time varying +models obtained by linearization are used to build local approximations of the control nlps in the form of convex +quadratic programs (qps) at each sampling time. the resulting qps have a typical mpc structure and can be solved +in the range of milliseconds by recent structure exploiting solvers, which is key to the real-time feasibility of the +overall control scheme. obstacle avoidance is incorporated by means of a high-level corridor planner based on +dynamic programming, which generates convex constraints for the controllers according to the current position of +opponents and the track layout. the control performance is investigated experimentally using 1:43 scale rc race +cars, driven at speeds of more than 3 m/s and in operating regions with saturated rear tire forces (drifting). the +algorithms run at 50 hz sampling rate on embedded computing platforms, demonstrating the real-time feasibility +and high performance of optimization-based approaches for autonomous racing.",3 +"abstract: polarization-division multiplexed (pdm) transmission based on the nonlinear fourier +transform (nft) is proposed for optical fiber communication. the nft algorithms are generalized +from the scalar nonlinear schrödinger equation for one polarization to the manakov system +for two polarizations. the transmission performance of the pdm nonlinear frequency-division +multiplexing (nfdm) and pdm orthogonal frequency-division multiplexing (ofdm) are +determined. 
it
+is shown that the transmission performance in terms of q-factor is approximately
+the same in pdm-nfdm and single polarization nfdm at twice the data rate and that the
+polarization-mode dispersion does not seriously degrade system performance. compared with
+pdm-ofdm, pdm-nfdm achieves a q-factor gain of 6.4 db. the theory can be generalized to
+multi-mode fibers in the strong coupling regime, paving the way for the application of the nft
+to address the nonlinear effects in space-division multiplexing.
+© 2017 optical society of america
+ocis codes: (060.2330) fiber optics communications, (060.4230) multiplexing, (060.4370) nonlinear optics, fibers",7
+"abstract. in this paper we propose an algorithm for the numerical
+solution of arbitrary differential equations of fractional order. the algorithm is obtained by using the following decomposition of the differential
+equation into a system of differential equations of integer order connected
+with inverse forms of abel-integral equations. the algorithm is used for
+solution of the linear and non-linear equations.",5
+"abstract. the klee’s measure of n axis-parallel boxes in r^d is the
+volume of their union. it can be computed in time within o(n^{d/2}) in
+the worst case. we describe three techniques to boost its computation:
+one based on some type of “degeneracy” of the input, and two ones
+on the inherent “easiness” of the structure of the input. the first technique benefits from instances where the maxima of the input is of small
+size h, and yields a solution running in time within o(n log^{2d−2} h +
+h^{d/2}) ⊆ o(n^{d/2}). the second technique takes advantage of instances
+where no d-dimensional axis-aligned hyperplane intersects more than k
+boxes in some dimension, and yields a solution running in time within
+o(n log n + nk^{(d−2)/2}) ⊆ o(n^{d/2}). the third technique takes advantage of
+instances where the intersection graph of the input has small treewidth ω.
+it yields an algorithm running in time within o(n^4 ω log ω + n(ω log ω)^{d/2})
+in general, and in time within o(n log n + nω^{d/2}) if an optimal tree decomposition of the intersection graph is given. we show how to combine
+these techniques in an algorithm which takes advantage of all three configurations.",8
+"abstract—using a drone as an aerial base station (abs)
+to provide coverage to users on the ground is envisaged as
+a promising solution for beyond fifth generation (beyond-5g)
+wireless networks. while the literature to date has examined
+downlink cellular networks with abss, we consider an uplink
+cellular network with an abs. specifically, we analyze the use of
+an underlay abs to provide coverage for a temporary event, such
+as a sporting event or a concert in a stadium. using stochastic
+geometry, we derive the analytical expressions for the uplink
+coverage probability of the terrestrial base station (tbs) and
+the abs. the results are expressed in terms of (i) the laplace
+transforms of the interference power distribution at the tbs
+and the abs and (ii) the distance distribution between the abs
+and an independently and uniformly distributed (i.u.d.) abs-supported user equipment and between the abs and an i.u.d.
+tbs-supported user equipment. the accuracy of the analytical
+results is verified by monte carlo simulations. our results show
+that varying the abs height leads to a trade-off between the
+uplink coverage probability of the tbs and the abs. in addition,
+assuming a quality of service of 90% at the tbs, an uplink
+coverage probability of the abs of over 85% can be achieved,
+with the abs deployed at or below its optimal height of typically
+between 250 − 500 m for the considered setup.",7
+"abstract
+many evolutionary and constructive heuristic approaches have been
+introduced in order to solve the traveling thief problem (ttp). however, the accuracy of such approaches is unknown due to their inability
+to find global optima.
in this paper, we propose three exact algorithms +and a hybrid approach to the ttp. we compare these with state-of-theart approaches to gather a comprehensive overview on the accuracy of +heuristic methods for solving small ttp instances.",8 +"abstract +the quest for algorithms that enable cognitive abilities is an important part of +machine learning. a common trait in many recently investigated cognitive-like +tasks is that they take into account different data modalities, such as visual and +textual input. in this paper we propose a novel and generally applicable form +of attention mechanism that learns high-order correlations between various data +modalities. we show that high-order correlations effectively direct the appropriate +attention to the relevant elements in the different data modalities that are required +to solve the joint task. we demonstrate the effectiveness of our high-order attention +mechanism on the task of visual question answering (vqa), where we achieve +state-of-the-art performance on the standard vqa dataset.",2 +"abstract +very important breakthroughs in data-centric machine learning algorithms led to impressive performance in ‘transactional’ +point applications such as detecting anger in speech, alerts from a face recognition system, or ekg interpretation. nontransactional applications, e.g. medical diagnosis beyond the ekg results, require ai algorithms that integrate deeper and +broader knowledge in their problem-solving capabilities, e.g. integrating knowledge about anatomy and physiology of the heart +with ekg results and additional patient’s findings. similarly, for military aerial interpretation, where knowledge about enemy +doctrines on force composition and spread helps immensely in situation assessment beyond image recognition of individual +objects. +an initiative is proposed to build wikipedia for smart machines, meaning target readers are not human, but rather smart +machines. 
named rekopedia, the goal is to develop methodologies, tools, and automatic algorithms to convert humanity’s
+knowledge that we all learn in schools, universities and during our professional life into reusable knowledge structures that
+smart machines can use in their inference algorithms. ideally, rekopedia would be an open source shared knowledge repository
+similar to the well-known shared open source software code repositories.
+the double deep learning approach advocates integrating data-centric machine self-learning techniques with machine-teaching techniques to leverage the power of both and overcome their corresponding limitations. for illustration, an outline of a
+$15m project is described to produce reko knowledge modules for medical diagnosis of about 1,000 disorders.
+ai applications that are based solely on data-centric machine learning algorithms are typically point solutions for transactional
+tasks that do not lend themselves to automatic generalization beyond the scope of the data sets they are based on. today’s ai
+industry is fragmented, and we are not establishing broad and deep enough foundations that will enable us to build higher level
+‘generic’, ‘universal’ intelligence, let alone ‘super-intelligence’. we must find ways to create synergies between these fragments
+and connect them with external knowledge sources, if we wish to scale the ai industry faster.
+examples in the article are based on, or inspired by, real-life non-transactional ai systems i deployed over decades of an ai career
+that benefit hundreds of millions of people around the globe. we are now in the second ai ‘spring’ after a long ai ‘winter’.
+to avoid sliding again into an ai winter, it is essential that we rebalance the roles of data and knowledge.
+data is
+important, but knowledge, deep and commonsense, is equally important.",2
+"abstract
+we demonstrate that the integrality gap of the natural cut-based lp relaxation for the directed steiner
+tree problem is o(log k) in quasi-bipartite graphs with k terminals. such instances can be seen to
+generalize set cover, so the integrality gap analysis is tight up to a constant factor. a novel aspect
+of our approach is that we use the primal-dual method, a technique that is rarely used in designing
+approximation algorithms for network design problems in directed graphs.",8
+"abstract
+recent approaches based on artificial neural networks (anns) have shown promising results for named-entity recognition
+(ner). in order to achieve high performances, anns need to be trained on a
+large labeled dataset. however, labels
+might be difficult to obtain for the dataset
+on which the user wants to perform ner:
+label scarcity is particularly pronounced
+for patient note de-identification, which
+is an instance of ner. in this work, we
+analyze to what extent transfer learning
+may address this issue. in particular,
+we demonstrate that transferring an ann
+model trained on a large labeled dataset to
+another dataset with a limited number of
+labels improves upon the state-of-the-art
+results on two different datasets for patient
+note de-identification.",2
+"abstract: we discuss relations between residual networks (resnet), recurrent neural networks (rnns) and
+the primate visual cortex. we begin with the observation that a shallow rnn is exactly equivalent to a very deep
+resnet with weight sharing among the layers. a direct implementation of such an rnn, although having orders
+of magnitude fewer parameters, leads to a performance similar to the corresponding resnet. we propose 1) a
+generalization of both rnn and resnet architectures and 2) the conjecture that a class of moderately deep rnns
+is a biologically-plausible model of the ventral stream in visual cortex.
we demonstrate the effectiveness of the +architectures by testing them on the cifar-10 dataset.",9 +"abstract. we study the ramification theory for actions involving group schemes, focusing on the tame ramification. we consider the notion of tame quotient stack introduced +in [aov] and the one of tame action introduced in [cept]. we establish a local slice +theorem for unramified actions and after proving some interesting lifting properties for +linearly reductive group schemes, we establish a slice theorem for actions by commutative group schemes inducing tame quotient stacks. roughly speaking, we show that +these actions are induced from an action of an extension of the inertia group on a finitely +presented flat neighborhood. we finally consider the notion of tame action and determine +how this notion is related to the one of tame quotient stack previously considered.",0 +"abstract. we propose a new statistical procedure able in some way to overcome the curse +of dimensionality without structural assumptions on the function to estimate. it relies on +a least-squares type penalized criterion and a new collection of models built from hyperbolic +biorthogonal wavelet bases. we study its properties in a unifying intensity estimation framework, +where an oracle-type inequality and adaptation to mixed smoothness are shown to hold. besides, +we describe an algorithm for implementing the estimator with a quite reasonable complexity.",10 +"abstract +the phenomenon of entropy concentration provides strong support for the maximum entropy method, maxent, for inferring a probability vector from information +in the form of constraints. here we extend this phenomenon, in a discrete setting, to +non-negative integral vectors not necessarily summing to 1. we show that linear constraints that simply bound the allowable sums suffice for concentration to occur even +in this setting. this requires a new, ‘generalized’ entropy measure in which the sum of +the vector plays a role. 
we measure the concentration in terms of deviation from the +maximum generalized entropy value, or in terms of the distance from the maximum +generalized entropy vector. we provide non-asymptotic bounds on the concentration +in terms of various parameters, including a tolerance on the constraints which ensures +that they are always satisfied by an integral vector. generalized entropy maximization +is not only compatible with ordinary maxent, but can also be considered an extension of it, as it allows us to address problems that cannot be formulated as maxent +problems.",10 +"abstract +our purpose in this study was to present an integral-transform approach +to the analytical solutions of the pennes' bioheat transfer equation and to apply it +to the calculation of temperature distribution in tissues in hyperthermia with +magnetic nanoparticles (magnetic hyperthermia). +the validity of our method was investigated by comparison with the +analytical solutions obtained by the green's function method for point and shell +heat sources and the numerical solutions obtained by the finite-difference +method for gaussian-distributed and step-function sources. +there was good agreement between the radial profiles of temperature +calculated by our method and those obtained by the green's function method. +there was also good agreement between our method and the finite-difference +method except for the central temperature for a step-function source that had +approximately a 0.3% difference. we also found that the equations describing +the steady-state solutions for point and shell sources obtained by our method +agreed with those obtained by the green’s function method. these results +appear to indicate the validity of our method. +in conclusion, we presented an integral-transform approach to the +bioheat transfer problems in magnetic hyperthermia, and this study +demonstrated the validity of our method. 
the analytical solutions presented in
+this study will be useful for gaining some insight into the heat diffusion process
+during magnetic hyperthermia, for testing numerical codes and/or more",5
+"abstract—in this paper, we address the problem of
+distributed multi-target tracking with labeled set filters in the
+framework of generalized covariance intersection (gci). our
+analyses show that the label space mismatching (ls-dm) phenomenon, which means the same realization drawn from label
+spaces of different sensors does not have the same implication,
+is quite common in practical scenarios and may bring serious
+problems. our contributions are two-fold. firstly, we provide a
+principled mathematical definition of “label spaces matching (ls-m)” based on information divergence, which is also referred
+to as the ls-m criterion. then, to handle the ls-dm, we propose a
+novel two-step distributed fusion algorithm, named gci fusion
+via label spaces matching (gci-lsm). the first step is to match
+the label spaces from different sensors. to this end, we build a
+ranked assignment problem and design a cost function consistent
+with the ls-m criterion to seek the optimal solution of matching
+correspondence between label spaces of different sensors. the
+second step is to perform the gci fusion on the matched label
+space. we also derive the gci fusion with generic labeled multi-object (lmo) densities based on ls-m, which is the foundation
+of labeled distributed fusion algorithms. simulation results for
+a gaussian mixture implementation highlight the performance of
+the proposed gci-lsm algorithm in two different tracking
+scenarios.",3
+"abstract. we answer a question of celikbas, dao, and takahashi by establishing the following characterization of gorenstein rings: a commutative
+noetherian local ring (r, m) is gorenstein if and only if it admits an integrally
+closed m-primary ideal of finite gorenstein dimension.
this is accomplished
+through a detailed study of certain test complexes. along the way we construct such a test complex that detects finiteness of gorenstein dimension, but
+not that of projective dimension.",0
+"abstraction, in which the high-level representations can
+amplify aspects of the input that are important for discrimination. these techniques have been used amongst others
+to identify network threats [17] or encrypted traffic on a
+network [18] [19]. a convolutional neural network (cnn) is
+a specialised architecture of ann that employs a convolution
+operation in at least one of its layers [20] [21]. a variety of
+substantiated cnn architectures have been used to great effect
+in computer vision [22] and even natural language processing
+(nlp), with empirically distinguished superiority in semantic
+matching [23], compared to other models.
+cryptoknight is developed in coordination with this
+methodology. we introduce a scalable learning system that
+can easily incorporate new samples through the scalable synthesis of customisable cryptographic algorithms. its entirely
+automated core architecture is aimed to minimise human
+interaction, thus allowing the composition of an effective
+model. we tested the framework on a number of externally
+sourced applications utilising non-library linked functionality.
+our experimental analysis indicates that cryptoknight is a
+flexible solution that can quickly learn from new cryptographic execution patterns to classify unknown software. this
+manuscript presents the following contributions:
+• our unique convolutional neural network architecture
+fits variable-length data to map an application’s time-invariant cryptographic execution.
+• complemented by procedural synthesis, we address the
+issue of this task’s disproportionate latent feature space.
+• the realised framework, cryptoknight, has demonstrably
+faster results compared to those of previous methodologies,
+and is extensively re-trainable.
+ii.
related work
+the cryptovirological threat model has rapidly evolved over
+the last decade. a number of notable individuals and research
+groups have attempted to address the problem of cryptographic
+primitive identification. we will discuss the consequences of
+their findings here and address intrinsic problems.
+a. heuristics
+heuristic methods [24] are often utilised to locate an
+optimal strategy for capturing the most appropriate solution.
+these measures have previously shown great success in cryptographic primitive identification. a joint project from eth
+zürich and google, inc. [8] detailed the automated decryption
+of encrypted network communication in memory, to identify
+the location and time a subject binary interacted with decrypted input. from an execution trace which dynamically
+extracted memory access patterns and control flow data, [8]
+was able to identify the necessary factors required to retrieve
+the relevant data in a new process. his implementation was",9
+"abstract
+goal recognition is the problem of inferring the
+goal of an agent, based on its observed actions. an
+inspiring approach—plan recognition by planning
+(prp)—uses off-the-shelf planners to dynamically
+generate plans for given goals, eliminating the need
+for the traditional plan library. however, the existing
+prp formulation is inherently inefficient in online
+recognition, and cannot be used with motion planners for continuous spaces. in this paper, we utilize a different prp formulation which allows for
+online goal recognition, and for application in continuous spaces. we present an online recognition
+algorithm, where two heuristic decision points may
+be used to improve run-time significantly over existing work.
we specify heuristics for continuous
+domains, prove guarantees on their use, and empirically evaluate the algorithm over hundreds of
+experiments in both a 3d navigational environment
+and a cooperative robotic team task.",2
+"abstract
+remote sensing satellite data offer the unique possibility to map land use land cover transformations by
+providing spatially explicit information. however, detection of short-term processes and land use patterns
+of high spatial-temporal variability is a challenging task.
+we present a novel framework using multi-temporal terrasar-x data and machine learning techniques,
+namely discriminative markov random fields with spatio-temporal priors, and import vector machines, in
+order to advance the mapping of land cover characterized by short-term changes. our study region covers
+a current deforestation frontier in the brazilian state pará with land cover dominated by primary forests,
+different types of pasture land and secondary vegetation, and land use dominated by short-term processes
+such as slash-and-burn activities. the data set comprises multi-temporal terrasar-x imagery acquired
+over the course of the 2014 dry season, as well as optical data (rapideye, landsat) for reference. results
+show that land use land cover is reliably mapped, resulting in spatially adjusted overall accuracies of up to
+79% in a five-class setting, yet limitations for the differentiation of different pasture types remain.
+the proposed method is applicable to multi-temporal data sets, and constitutes a feasible approach to map
+land use land cover in regions that are affected by high-frequency temporal changes.
+keywords: markov random fields (mrf), import vector machines (ivm), multi-temporal lulc
+mapping, deforestation, amazon, sar",1
+"abstract
+for undirected graphs g = (v, e) and g0 = (v0, e0), say that g is a region intersection
+graph over g0 if there is a family of connected subsets {ru ⊆ v0 : u ∈ v} of g0 such that
+{u, v} ∈ e ⇐⇒ ru ∩ rv ≠ ∅.
+we show that if g0 excludes the complete graph kh as a minor for some h > 1, then every region
+intersection graph g over g0 with m edges has a balanced separator with at most ch √m nodes,
+where ch is a constant depending only on h. if g additionally has uniformly bounded vertex
+degrees, then such a separator is found by spectral partitioning.
+a string graph is the intersection graph of continuous arcs in the plane. string graphs are
+precisely region intersection graphs over planar graphs. thus the preceding result implies that
+every string graph with m edges has a balanced separator of size o(√m). this bound is optimal,
+as it generalizes the planar separator theorem. it confirms a conjecture of fox and pach (2010),
+and improves over the o(√m log m) bound of matoušek (2013).",8
+"abstract
+cell search is the process for a user to detect its neighboring base stations (bss) and make a
+cell selection decision. due to the importance of beamforming gain in millimeter wave (mmwave)
+and massive mimo cellular networks, the directional cell search delay performance is investigated. a
+cellular network with fixed bs and user locations is considered, so that strong temporal correlations
+exist for the sinr experienced at each bs and user. for poisson cellular networks with rayleigh fading
+channels, a closed-form expression for the spatially averaged mean cell search delay of all users is
+derived. this mean cell search delay for a noise-limited network (e.g., mmwave network) is proved to
+be infinite whenever the non-line-of-sight (nlos) path loss exponent is larger than 2.
for interference-limited networks, a phase transition for the mean cell search delay is shown to exist in terms of the
+number of bs antennas/beams m: the mean cell search delay is infinite when m is smaller than a
+threshold and finite otherwise. beam-sweeping is also demonstrated to be effective in decreasing the
+cell search delay, especially for the cell edge users.",7
+abstract,5
+"abstract. we deduce properties of the koopman representation of a positive entropy probability measure-preserving action of a countable, discrete, sofic group. our main result may be regarded as a “representation-theoretic” version of sinaĭ’s factor theorem. we show that probability measure-preserving actions with
+completely positive entropy of an infinite sofic group must be mixing and, if the group is nonamenable,
+have spectral gap. this implies that if γ is a nonamenable group and γ ↷ (x, µ) is a probability measure-preserving action which is not strongly ergodic, then no action orbit equivalent to γ ↷ (x, µ) has completely
+positive entropy. crucial to these results is a formula for entropy in the presence of a polish, but a priori
+noncompact, model.",4
+"abstraction and systems language design
+c. jasson casey∗ , andrew sutton† , gabriel dos reis† , alex sprintson∗
+∗ department",6
+"abstract
+we discuss actions of free groups on the circle with “ping-pong” dynamics; these are dynamics
+determined by a finite amount of combinatorial data, analogous to schottky domains or markov
+partitions. using this, we show that the free group fn admits an isolated circular order if and
+only if n is even, in stark contrast with the case for linear orders. this answers a question from
+[21]. inspired by work in [2], we also exhibit examples of “exotic” isolated points in the space of
+all circular orders on f2 .
analogous results are obtained for linear orders on the groups fn × z.
+"abstract
+we present a procedure for computing the convolution of exponential signals without the need of
+solving integrals or summations. the procedure requires the resolution of a system of linear equations
+involving vandermonde matrices. we apply the method to solve ordinary differential/difference equations
+with constant coefficients.",3
+"abstract—secure multi-party computation (mpc) enables a
+set of mutually distrusting parties to cooperatively compute,
+using a cryptographic protocol, a function over their private data. this paper presents wys?, a new domain-specific
+language (dsl) implementation for writing mpcs. wys?
+is a verified, domain-specific integrated language extension
+(vdsile), a new kind of embedded dsl hosted in f?, a full-featured, verification-oriented programming language. wys?
+source programs are essentially f? programs written against
+an mpc library, meaning that programmers can use f?’s
+logic to verify the correctness and security properties of their
+programs. to reason about the distributed semantics of these
+programs, we formalize a deep embedding of wys?, also in
+f?. we mechanize the necessary metatheory to prove that the
+properties verified for the wys? source programs carry over
+to the distributed, multi-party semantics. finally, we use f?’s
+extraction mechanism to extract an interpreter that we have
+proved matches this semantics, yielding a verified implementation. wys? is the first dsl to enable formal verification of
+source mpc programs, and also the first mpc dsl to provide
+a verified implementation. with wys? we have implemented
+several mpc protocols, including private set intersection, joint
+median, and an mpc-based card dealing application, and have
+verified their security and correctness.",6
+"abstract
+antifragile systems grow measurably better in the
+presence of hazards.
this is in contrast to fragile
+systems which break down in the presence of hazards, robust systems that tolerate hazards up to a
+certain degree, and resilient systems that – like self-healing systems – revert to their earlier expected
+behavior after a period of convalescence. the notion of antifragility was introduced by taleb for
+economic systems, but its applicability has been
+illustrated in biological and engineering domains
+as well. in this paper, we propose an architecture
+that imparts antifragility to intelligent autonomous
+systems, specifically those that are goal-driven and
+based on ai-planning. we argue that this architecture allows the system to self-improve by uncovering new capabilities obtained either through the
+hazards themselves (opportunistic) or through deliberation (strategic). an ai planning-based case
+study of an autonomous wheeled robot is presented.
+we show that with the proposed architecture, the
+robot develops antifragile behaviour with respect to
+an oil spill hazard.",2
+"abstract
+the objective of clustering is to discover natural groups in datasets and to identify geometrical structures which might reside there, without assuming any prior knowledge of the
+characteristics of the data. the problem can be seen as detecting the inherent separations
+between groups of a given point set in a metric space governed by a similarity function. the
+pairwise similarities between all data objects form a weighted graph adjacency matrix which
+contains all necessary information for the clustering process, which can consequently be formulated as a graph partitioning problem. in this context, we propose a new cluster quality measure
+which uses the maximum spanning tree and allows us to compute the optimal clustering under
+the min-max principle in polynomial time. our algorithm can be applied when a load-balanced
+clustering is required.",8
+"abstract.
we investigate a possible connection between the fsz properties
+of a group and its sylow subgroups. we show that the simple groups g2 (5)
+and s6 (5), as well as all sporadic simple groups with order divisible by 5⁶, are
+not fsz, and that neither are their sylow 5-subgroups. the groups g2 (5)
+and hn were previously established as non-fsz by peter schauenburg; we
+present alternative proofs. all other sporadic simple groups and their sylow
+subgroups are shown to be fsz. we conclude by considering all perfect groups
+available through gap with order at most 10⁶, and show they are non-fsz
+if and only if their sylow 5-subgroups are non-fsz.",4
+"abstract
+while most scene flow methods use either variational optimization or a strong rigid motion assumption, we show for
+the first time that scene flow can also be estimated by dense
+interpolation of sparse matches. to this end, we find sparse
+matches across two stereo image pairs that are detected
+without any prior regularization and perform dense interpolation preserving geometric and motion boundaries by
+using edge information. a few iterations of variational energy minimization are performed to refine our results, which
+are thoroughly evaluated on the kitti benchmark and additionally compared to state-of-the-art on mpi sintel. for
+application in an automotive context, we further show that
+an optional ego-motion model helps to boost performance
+and blends smoothly into our approach to produce a segmentation of the scene into static and dynamic parts.",1
+"abstract
+we investigate the problem of testing the equivalence between two discrete histograms. a
+k-histogram over [n] is a probability distribution that is piecewise constant over some set of k
+intervals over [n]. histograms have been extensively studied in computer science and statistics.
+given a set of samples from two k-histogram distributions p, q over [n], we want to distinguish
+(with high probability) between the cases that p = q and ‖p − q‖1 ≥ ε. the main contribution
+of this paper is a new algorithm for this testing problem and a nearly matching information-theoretic lower bound. specifically, the sample complexity of our algorithm matches our lower
+bound up to a logarithmic factor, improving on previous work by polynomial factors in the
+relevant parameters. our algorithmic approach applies in a more general setting and yields
+improved sample upper bounds for testing closeness of other structured distributions as well.",10
+"abstract
+haskell provides type-class-bounded and parametric polymorphism as opposed to subtype
+polymorphism of object-oriented languages such as java and ocaml. it is a contentious
+question whether haskell 98 without extensions, or with common extensions, or with
+new extensions can fully support conventional object-oriented programming with encapsulation, mutable state, inheritance, overriding, statically checked implicit and explicit
+subtyping, and so on.
+in a first phase, we demonstrate how far we can get with object-oriented functional
+programming, if we restrict ourselves to plain haskell 98. in the second and major phase,
+we systematically substantiate that haskell 98, with some common extensions, supports
+all the conventional oo features plus more advanced ones, including first-class lexically
+scoped classes, implicitly polymorphic classes, flexible multiple inheritance, safe downcasts
+and safe co-variant arguments. haskell indeed can support width and depth, structural
+and nominal subtyping. we address the particular challenge to preserve haskell’s type
+inference even for objects and object-operating functions. advanced type inference is a
+strength of haskell that is worth preserving.
many of the features we get “for free”: the +type system of haskell turns out to be a great help and a guide rather than a hindrance. +the oo features are introduced in haskell as the oohaskell library, non-trivially +based on the hlist library of extensible polymorphic records with first-class labels and +subtyping. the library sample code, which is patterned after the examples found in oo +textbooks and programming language tutorials, including the ocaml object tutorial, +demonstrates that oo code translates into oohaskell in an intuition-preserving way: +essentially expression-by-expression, without requiring global transformations. +oohaskell lends itself as a sandbox for typed oo language design. +keywords: object-oriented functional programming, object type inference, typed objectoriented language design, heterogeneous collections, ml-art, mutable objects, typeclass-based programming, haskell, haskell 98, structural subtyping, duck typing, nominal subtyping, width subtyping, deep subtyping, co-variance",2 +"abstract +we refine the general methodology in [1] for the construction and analysis of essentially minimax estimators for a wide class +of functionals of finite dimensional parameters, and elaborate on the case of discrete distributions with support size s comparable +with the number of observations n. specifically, we determine the “smooth” and “non-smooth” regimes based on the confidence +set and the smoothness of the functional. in the “non-smooth” regime, we apply an unbiased estimator for a suitable polynomial +approximation of the functional. in the “smooth” regime, we construct a general version of the bias-corrected maximum likelihood +estimator (mle) based on taylor expansion. +we apply the general methodology to the problem of estimating the kl divergence between two discrete probability measures +p and q from empirical data in a non-asymptotic and possibly large alphabet setting. 
we construct minimax rate-optimal estimators
+for d(p‖q) when the likelihood ratio is upper bounded by a constant which may depend on the support size, and show that
+the performance of the optimal estimator with n samples is essentially that of the mle with n ln n samples. our estimator is
+adaptive in the sense that it does not require the knowledge of the support size nor the upper bound on the likelihood ratio. we
+show that the general methodology results in minimax rate-optimal estimators for other divergences as well, such as the hellinger
+distance and the χ²-divergence. our approach refines the approximation methodology recently developed for the construction of
+near minimax estimators of functionals of high-dimensional parameters, such as entropy, rényi entropy, mutual information and
+ℓ1 distance in large alphabet settings, and shows that the effective sample size enlargement phenomenon holds significantly more
+widely than previously established.
+index terms
+divergence estimation, kl divergence, multivariate approximation theory, taylor expansion, functional estimation, maximum
+likelihood estimator, high dimensional statistics, minimax lower bound",10
+"abstract—we introduce learned attention models into the radio machine learning domain for the task of modulation recognition by leveraging spatial transformer networks and introducing
+new radio domain appropriate transformations. this attention
+model allows the network to learn a localization network capable
+of synchronizing and normalizing a radio signal blindly with zero
+knowledge of the signal’s structure based on optimization of the
+network for classification accuracy, sparse representation, and
+regularization.
using this architecture we are able to outperform
+our prior results in accuracy vs. signal-to-noise ratio against an
+identical system without attention; however, we believe such an
+attention model has implications far beyond the task of modulation
+recognition.",3
+"abstract
+planar ornaments, a.k.a. wallpapers, are regular repetitive patterns which exhibit translational symmetry
+in two independent directions. there are exactly 17 distinct planar symmetry groups. we present a fully
+automatic method for complete analysis of planar ornaments in 13 of these groups, specifically, the groups
+called p6m, p6, p4g, p4m, p4, p31m, p3m, p3, cmm, pgg, pg, p2 and p1. given the image of an ornament
+fragment, we present a method to simultaneously classify the input into one of the 13 groups and extract
+the so-called fundamental domain (fd), the minimum region that is sufficient to reconstruct the entire
+ornament. a nice feature of our method is that even when the given ornament image is a small portion
+such that it does not contain multiple translational units, the symmetry group as well as the fundamental
+domain can still be defined. this is because, in contrast to the common approach, we do not attempt to first
+identify a global translational repetition lattice. though the presented constructions work for quite a wide
+range of ornament patterns, a key assumption we make is that the perceivable motifs (shapes that repeat)
+alone do not provide clues for the underlying symmetries of the ornament. in this sense, our main target is
+the planar arrangements of asymmetric interlocking shapes, as in the symmetry art of escher.
+keywords: ornaments, wallpaper groups, mosaics, regular patterns, escher style planar patterns",1
with the idling mechanism, each node i, at each
+iteration k, is active – updates its solution estimate and exchanges
+messages with its network neighborhood – with probability pk,
+and it stays idle with probability 1 − pk, while the activations
+are independent both across nodes and across iterations. in
+this paper, we demonstrate that the idling mechanism can be
+successfully incorporated in distributed second order methods
+also. specifically, we apply the idling mechanism to the recently
+proposed distributed quasi-newton method (dqn). we first
+show theoretically that, when pk grows to one across iterations
+in a controlled manner, dqn with idling exhibits very similar
+theoretical convergence and convergence rate properties as
+the standard dqn method, thus achieving the same order of
+convergence rate (r-linear) as the standard dqn, but with
+significantly cheaper updates. simulation examples confirm the
+benefits of incorporating the idling mechanism, demonstrate the
+method’s flexibility with respect to the choice of the pk ’s, and
+compare the proposed idling method with related algorithms
+from the literature.
+index terms—distributed optimization, variable sample
+schemes, second order methods, newton-like methods, linear
+convergence.",7
+"abstract
+this paper considers recovering l-dimensional vectors w, and x1 , x2 , . . . , xn from their
+circular convolutions yn = w ∗ xn , n = 1, 2, 3, . . . , n . the vector w is assumed to be s-sparse
+in a known basis that is spread out in the fourier domain, and each input xn is a member of a
+known k-dimensional random subspace.
+we prove that whenever k + s log² s ≲ l / log⁴(ln), the problem can be solved effectively
+by using only nuclear-norm minimization as the convex relaxation, as long as the inputs are
+sufficiently diverse and obey n ≳ log²(ln). by “diverse inputs”, we mean that the xn ’s belong
+to different, generic subspaces.
to our knowledge, this is the first theoretical result on blind
+deconvolution where the subspace to which w belongs is not fixed, but needs to be determined.
+we discuss the result in the context of multipath channel estimation in wireless communications. both the fading coefficients and the delays in the channel impulse response w are
+unknown. the encoder codes the k-dimensional message vectors randomly and then transmits
+coded messages xn ’s over a fixed channel one after the other. the decoder then discovers all of
+the messages and the channel response when the number of samples taken for each received message is roughly greater than (k + s log² s) log⁴(ln), and the number of messages is roughly
+at least log²(ln).",7
+"abstract
+this is an account of the theory of jsj decompositions of finitely generated groups,
+as developed in the last twenty years or so.
+we give a simple general definition of jsj decompositions (or rather of their bass-serre trees), as maximal universally elliptic trees. in general, there is no preferred jsj
+decomposition, and the right object to consider is the whole set of jsj decompositions,
+which forms a contractible space: the jsj deformation space (analogous to outer
+space).
+we prove that jsj decompositions exist for any finitely presented group, without
+any assumption on edge groups. when edge groups are slender, we describe flexible vertices of jsj decompositions as quadratically hanging extensions of 2-orbifold
+groups.
+similar results hold in the presence of acylindricity, in particular for splittings
+of torsion-free csa groups over abelian groups, and splittings of relatively hyperbolic
+groups over virtually cyclic or parabolic subgroups. using trees of cylinders, we obtain
+canonical jsj trees (which are invariant under automorphisms).
+we introduce a variant in which the property of being universally elliptic is replaced
+by the more restrictive and rigid property of being universally compatible.
this yields
+a canonical compatibility jsj tree, not just a deformation space. we show that it
+exists for any finitely presented group.
+we give many examples, and we work throughout with relative decompositions
+(restricting to trees where certain subgroups are elliptic).",4
+"abstract
+we study the problem of compressing recurrent neural networks (rnns). in particular, we focus on the compression
+of rnn acoustic models, which are motivated by the goal
+of building compact and accurate speech recognition systems
+which can be run efficiently on mobile devices. in this work,
+we present a technique for general recurrent model compression that jointly compresses both recurrent and non-recurrent
+inter-layer weight matrices. we find that the proposed technique allows us to reduce the size of our long short-term
+memory (lstm) acoustic model to a third of its original size
+with negligible loss in accuracy.
+index terms— model compression, lstm, rnn, svd,
+embedded speech recognition
+1. introduction
+neural networks (nns) with multiple feed-forward [1, 2] or
+recurrent hidden layers [3, 4] have emerged as state-of-the-art acoustic models (ams) for automatic speech recognition
+(asr) tasks. advances in computational capabilities coupled
+with the availability of large annotated speech corpora have
+made it possible to train nn-based ams with a large number
+of parameters [5] with great success.
+as speech recognition technologies continue to improve,
+they are becoming increasingly ubiquitous on mobile devices:
+voice assistants such as apple’s siri, microsoft’s cortana,
+amazon’s alexa and google now [6] enable users to search
+for information using their voice. although the traditional
+model for these applications has been to recognize speech
+remotely on large servers, there has been growing interest
+in developing asr technologies that can recognize the input speech directly “on-device” [7].
this has the promise to +reduce latency while enabling user interaction even in cases +where a mobile data connection is either unavailable, slow +or unreliable. some of the main challenges in this regard +are the disk, memory and computational constraints imposed +by these devices. since the number of operations in neural +† equal contribution. the authors would like to thank haşim sak and +raziel alvarez for helpful comments and suggestions on this work, and chris +thornton and yu-hsin chen for comments on an earlier draft.",9 +"abstract. we present a probabilistic extension of the description logic alc for +reasoning about statistical knowledge. we consider conditional statements over +proportions of the domain and are interested in the probabilistic-logical consequences of these proportions. after introducing some general reasoning problems +and analyzing their properties, we present first algorithms and complexity results +for reasoning in some fragments of statistical alc.",2 +"abstract +we classify the linearly reductive finite subgroup schemes g of +sl2 = sl(v ) over an algebraically closed field k of positive characteristic, up to conjugation. as a corollary, we prove that such g is in oneto-one correspondence with an isomorphism class of two-dimensional +f -rational gorenstein complete local rings with the coefficient field k +by the correspondence g 7→ ((sym v )g )b.",0 +"abstract +we show that the question whether a term is typable is decidable for type systems combining inclusion polymorphism with parametric polymorphism provided +the type constructors are at most unary. to prove this result we first reduce the +typability problem to the problem of solving a system of type inequations. the +result is then obtained by showing that the solvability of the resulting system of +type inequations is decidable.",6 +"abstract +this report presents jartege, a tool which allows random generation of unit tests for java +classes specified in jml. 
jml (java modeling language) is a specification language for +java which allows one to write invariants for classes, and pre- and postconditions for +operations. as in the jml-junit tool, we use jml specifications on the one hand to +eliminate irrelevant test cases, and on the other hand as a test oracle. jartege randomly +generates test cases, which consist of a sequence of constructor and method calls for the +classes under test. the random aspect of the tool can be parameterized by associating +weights to classes and operations, and by controlling the number of instances which are +created for each class under test. the practical use of jartege is illustrated by a small case +study. +keywords +testing, unit testing, random generation of test cases, java, jml",2 +"abstract—this work is motivated by the problem of error +correction in bit-shift channels with the so-called (d, k) input +constraints (where successive 1’s are required to be separated +by at least d and at most k zeros, 0 ≤ d < k ≤ ∞). bounds +on the size of optimal (d, k)-constrained codes correcting a +fixed number of bit-shifts are derived. the upper bound is +obtained by a packing argument, while the lower bound follows +from a construction based on a family of integer lattices. +several properties of (d, k)-constrained sequences that may be +of independent interest are established as well; in particular, +the capacity of the noiseless channel with (d, k)-constrained +constant-weight inputs is characterized. the results are relevant +for magnetic and optical storage systems, reader-to-tag rfid +channels, and other communication models where bit-shift errors +are dominant and where (d, k)-constrained sequences are used +for modulation. 
+index terms—bit-shift channel, peak shift, timing errors, +runlength-limited code, integer compositions, manhattan metric, +asymmetric distance, magnetic recording, inductive coupling.",7 +"abstract—ant colony system (acs) is a distributed (agentbased) algorithm which has been widely studied on the +symmetric travelling salesman problem (tsp). the optimum +parameters for this algorithm have to be found by trial and +error. we use a particle swarm optimization algorithm (pso) +to optimize the acs parameters working in a designed subset +of tsp instances. first goal is to perform the hybrid pso-acs +algorithm on a single instance to find the optimum parameters +and optimum solutions for the instance. second goal is to +analyze those sets of optimum parameters, in relation to +instance characteristics. computational results have shown +good quality solutions for single instances though with high +computational times, and that there may be sets of parameters +that work optimally for a majority of instances. +i. introduction",9 +"abstract— the analysis and interpretation of relationships +between biological molecules is done with the help of networks. +networks are used ubiquitously throughout biology to represent +the relationships between genes and gene products. network +models have facilitated a shift from the study of evolutionary +conservation between individual gene and gene products towards +the study of conservation at the level of pathways and complexes. +recent work has revealed much about chemical reactions inside +hundreds of organisms as well as universal characteristics of +metabolic networks, which shed light on the evolution of the +networks. however, characteristics of individual metabolites +have been neglected in this network. the current paper provides +an overview of bioinformatics software used in visualization of +biological networks using proteomic data, their main functions +and limitations of the software. 
+keywords- metabolic network; protein interaction network; +visualization tools.",5 +"abstract. we note a generalization of whyte’s geometric solution to the von +neumann problem for locally compact groups in terms of borel and clopen +piecewise translations. this strengthens a result of paterson on the existence +of borel paradoxical decompositions for non-amenable locally compact groups. +along the way, we study the connection between some geometric properties of +coarse spaces and certain algebraic characteristics of their wobbling groups.",4 +"abstract +this paper focuses on the modal analysis of laminated glass beams. in these +multilayer elements, the stiff glass plates are connected by compliant interlayers with frequency/temperature-dependent behavior. the aim of our study +is (i) to assess whether approximate techniques can accurately predict the +behavior of laminated glass structures and (ii) to propose an easy tool for +modal analysis based on the enhanced effective thickness concept by galuppi +and royer-carfagni. +to this purpose, we consider four approaches to the solution of the related nonlinear eigenvalue problem: a complex-eigenvalue solver based on +the newton method, the modal strain energy method, and two effective +thickness concepts. a comparative study of free vibrating laminated glass +beams is performed considering different geometries of cross-sections, boundary conditions, and material parameters for interlayers under two ambient +temperatures. the viscoelastic response of polymer foils is represented by +the generalized maxwell model. +we show that the simplified approaches predict natural frequencies with +an acceptable accuracy for most of the examples. however, there is a considerable scatter in predicted loss factors. 
the enhanced effective thickness +approach adjusted for modal analysis leads to lower errors in both quantities +compared to the other two simplified procedures, reducing the extreme error +in loss factors to one half compared to the modal strain energy method or to +one quarter compared to the original dynamic effective thickness method. +keywords: free vibrations, laminated glass, complex dynamic modulus, +dynamic effective thickness, enhanced effective thickness, modal strain +preprint submitted to arxiv",5 +"abstract. unsupervised learning permits the development of algorithms that are +able to adapt to a variety of different data sets using the same underlying rules +thanks to the autonomous discovery of discriminating features during training. +recently, a new class of hebbian-like and local unsupervised learning rules for +neural networks have been developed that minimise a similarity matching costfunction. these have been shown to perform sparse representation learning. this +study tests the effectiveness of one such learning rule for learning features from +images. the rule implemented is derived from a nonnegative classical multidimensional scaling cost-function, and is applied to both single and multi-layer +architectures. the features learned by the algorithm are then used as input to a +svm to test their effectiveness in classification on the established cifar-10 image dataset. the algorithm performs well in comparison to other unsupervised +learning algorithms and multi-layer networks, thus suggesting its validity in the +design of a new class of compact, online learning networks. +keywords: classification; competitive learning; feature learning; hebbian learning; online algorithm; neural networks; sparse coding; unsupervised learning.",1 +"abstract. throughout the last decade, we have seen much progress towards characterising and +computing the minimum hybridisation number for a set p of rooted phylogenetic trees. 
roughly +speaking, this minimum quantifies the number of hybridisation events needed to explain a set of +phylogenetic trees by simultaneously embedding them into a phylogenetic network. from a mathematical viewpoint, the notion of agreement forests is the underpinning concept for almost all results +that are related to calculating the minimum hybridisation number for when |p| = 2. however, despite various attempts, characterising this number in terms of agreement forests for |p| > 2 remains +elusive. in this paper, we characterise the minimum hybridisation number for when p is of arbitrary +size and consists of not necessarily binary trees. building on our previous work on cherry-picking +sequences, we first establish a new characterisation to compute the minimum hybridisation number +in the space of tree-child networks. subsequently, we show how this characterisation extends to the +space of all rooted phylogenetic networks. moreover, we establish a particular hardness result that +gives new insight into some of the limitations of agreement forests. +key words. agreement forest, cherry-picking sequence, minimum hybridisation, phylogenetic +networks, reticulation, tree-child networks +ams subject classifications. 05c05; 92d15",8 +"abstract +phylogenetic networks are a generalization of phylogenetic trees that allow for the +representation of evolutionary events acting at the population level, like recombination between genes, hybridization between lineages, and lateral gene transfer. while +most phylogenetics tools implement a wide range of algorithms on phylogenetic trees, +there exist only a few applications to work with phylogenetic networks, and there are no +open-source libraries either. in order to improve this situation, we have developed a perl +package that relies on the bioperl bundle and implements many algorithms on phylogenetic networks. 
we have also developed a java applet that makes use of the aforementioned perl package and allows the user to make simple experiments with phylogenetic +networks without having to develop a program or perl script by herself. the perl +package has been accepted as part of the bioperl bundle. it can be downloaded from +the url http://dmi.uib.es/~gcardona/bioinfo/bio-phylonetwork.tgz. the webbased application is available at the url http://dmi.uib.es/~gcardona/bioinfo/. +the perl package includes full documentation of all its features.",5 +"abstract—various hand-crafted features and metric learning +methods prevail in the field of person re-identification. compared +to these methods, this paper proposes a more general way that +can learn a similarity metric from image pixels directly. by +using a “siamese” deep neural network, the proposed method +can jointly learn the color feature, texture feature and metric in +a unified framework. the network has a symmetry structure with +two sub-networks which are connected by cosine function. to +deal with the big variations of person images, binomial deviance +is used to evaluate the cost between similarities and labels, which +is proved to be robust to outliers. +compared to existing researches, a more practical setting is +studied in the experiments that is training and test on different +datasets (cross dataset person re-identification). both in “intra +dataset” and “cross dataset” settings, the superiorities of the +proposed method are illustrated on viper and prid. +index terms—person re-identification, deep metric learning, +convolutional network, cross dataset",9 +"abstract +the class of ℓq -regularized least squares (lqls) are considered for estimating β ∈ rp from its n noisy +linear observations y = xβ + w. the performance of these schemes are studied under the high-dimensional +asymptotic setting in which the dimension of the signal grows linearly with the number of measurements. 
+in this asymptotic setting, phase transition diagrams (pt) are often used for comparing the performance +of different estimators. pt specifies the minimum number of observations required by a certain estimator +to recover a structured signal, e.g. a sparse one, from its noiseless linear observations. although phase +transition analysis is shown to provide useful information for compressed sensing, the fact that it ignores +the measurement noise not only limits its applicability in many application areas, but also may lead to +misunderstandings. for instance, consider a linear regression problem in which n > p and the signal is +not exactly sparse. if the measurement noise is ignored in such systems, regularization techniques, such +as lqls, seem to be irrelevant since even the ordinary least squares (ols) returns the exact solution. +however, it is well-known that if n is not much larger than p then the regularization techniques improve the +performance of ols. +in response to this limitation of pt analysis, we consider the low-noise sensitivity analysis. we show that +this analysis framework (i) reveals the advantage of lqls over ols, (ii) captures the difference between +different lqls estimators even when n > p, and (iii) provides a fair comparison among different estimators +in high signal-to-noise ratios. as an application of this framework, we will show that under mild conditions +lasso outperforms other lqls even when the signal is dense. finally, by a simple transformation we +connect our low-noise sensitivity framework to the classical asymptotic regime in which n/p → ∞ and +characterize how and when regularization techniques offer improvements over ordinary least squares, and +which regularizer gives the most improvement when the sample size is large. 
+key words: high-dimensional linear model, ℓq -regularized least squares, ordinary least squares, +lasso, phase transition, asymptotic mean square error, second-order expansion, classical asymptotics.",10 +"abstract—periodic event-triggered control (petc) [13] is a +version of event-triggered control (etc) that only requires to +measure the plant output periodically instead of continuously. in +this work, we present a construction of timing models for these +petc implementations to capture the dynamics of the traffic they +generate. in the construction, we employ a two-step approach. +we first partition the state space into a finite number of regions. +then in each region, the event-triggering behavior is analyzed +with the help of lmis. the state transitions among different +regions result from computing the reachable state set starting +from each region within the computed event time intervals. +index terms—systems abstractions; periodic event-triggered +control; lmi; formal methods; reachability analysis.",3 +"abstract. submodularity is one of the most important property of combinatorial optimization, and k-submodularity is a generalization of submodularity. maximization of a k-submodular function is np-hard, and +approximation algorithm has been studied. for monotone k-submodular +functions, [iwata, tanigawa, and yoshida 2016] gave k/(2k−1)-approximation +algorithm. in this paper, we give a deterministic algorithm by derandomizing that algorithm. our algorithm is k/(2k−1)-approximation and runs +in polynomial time.",8 +"abstract—in this technical note, we study the controllability of +diffusively coupled networks from a graph theoretic perspective. +we consider leader-follower networks, where the external control +inputs are injected to only some of the agents, namely the leaders. +our main result relates the controllability of such systems to the +graph distances between the agents. 
more specifically, we present +a graph topological lower bound on the rank of the controllability +matrix. this lower bound is tight, and it is applicable to systems +with arbitrary network topologies, coupling weights, and number +of leaders. an algorithm for computing the lower bound is also +provided. furthermore, as a prominent application, we present +how the proposed bound can be utilized to select a minimal set +of leaders for achieving controllability, even when the coupling +weights are unknown.",3 +"abstract: in this paper, we consider hands-off control via minimization of the clot +(combined l-one and two) norm. the maximum hands-off control is the l0 -optimal (or the +sparsest) control among all feasible controls that are bounded by a specified value and transfer +the state from a given initial state to the origin within a fixed time duration. in general, the +maximum hands-off control is a bang-off-bang control taking values of ±1 and 0. for many real +applications, such discontinuity in the control is not desirable. to obtain a continuous but still +relatively sparse control, we propose to use the clot norm, a convex combination of l1 and +l2 norms. we show by numerical simulation that the clot control is continuous and much +sparser (i.e. has longer time duration on which the control takes 0) than the conventional en +(elastic net) control, which is a convex combination of l1 and squared l2 norms. +keywords: optimal control, convex optimization, sparsity, maximum hands-off control, +bang-off-bang control +1. introduction +sparsity has recently emerged as an important topic in +signal/image processing, machine learning, statistics, etc. +if y ∈ rm and a ∈ rm×n are specified with m < n, then +the equation y = ax is underdetermined and has infinitely +many solutions for x if a has rank m. finding the sparsest +solution (that is, the solution with the fewest number of +nonzero elements) can be formulated as +min kzk0 subject to az = b. +z",3 +"abstract. 
the compressed zero-divisor graph γc (r) associated with a commutative ring r has +vertex set equal to the set of equivalence classes {[r] | r ∈ z(r), r 6= 0} where r ∼ s whenever +ann(r) = ann(s). distinct classes [r], [s] are adjacent in γc (r) if and only if xy = 0 for all +x ∈ [r], y ∈ [s]. in this paper, we explore the compressed zero-divisor graph associated with quotient +rings of unique factorization domains. specifically, we prove several theorems which exhibit a method +of constructing γ(r) for when one quotients out by a principal ideal, and prove sufficient conditions +for when two such compressed graphs are graph-isomorphic. we show these conditions are not +necessary unless one alters the definition of the compressed graph to admit looped vertices, and +conjecture necessary and sufficient conditions for two compressed graphs with loops to be isomorphic +when considering any quotient ring of a unique factorization domain.",0 +"abstract—in this paper, an efficient control strategy for +physiological interaction based anaesthetic drug infusion model is +explored using the fractional order (fo) proportional integral +derivative (pid) controllers. the dynamic model is composed of +several human organs by considering the brain response to the +anaesthetic drug as output and the drug infusion rate as the +control input. particle swarm optimisation (pso) is employed to +obtain the optimal set of parameters for pid/fopid controller +structures. with the proposed fopid control scheme much less +amount of drug-infusion system can be designed to attain a +specific anaesthetic target and also shows high robustness for +±50% parametric uncertainty in the patient’s brain model. +keywords—anaesthetic drug; dosage control; fractional order +pid controller; physiological organs; pso",3 +"abstract. 
the main result is an elementary proof of holonomicity for a-hypergeometric +systems, with no requirements on the behavior of their singularities, originally due to +adolphson [ado94] after the regular singular case by gelfand and gelfand [gg86]. our +method yields a direct de novo proof that a-hypergeometric systems form holonomic families over their parameter spaces, as shown by matusevich, miller, and walther [mmw05].",0 +"abstract— we propose a top-down approach for formation +control of heterogeneous multi-agent systems, based on the +method of eigenstructure assignment. given the problem of +achieving scalable formations on the plane, our approach +globally computes a state feedback control that assigns desired closed-loop eigenvalues/eigenvectors. we characterize the +relation between the eigenvalues/eigenvectors and the resulting +inter-agent communication topology, and design special (sparse) +topologies such that the synthesized control may be implemented locally by the individual agents. moreover, we present +a hierarchical synthesis procedure that significantly improves +computational efficiency. finally, we extend the proposed approach to achieve rigid formation and circular motion, and +illustrate these results by simulation examples.",3 +"abstract. we define the notion of limit set intersection property for a collection of subgroups of a hyperbolic group; namely, for a hyperbolic group g and +a collection of subgroups s we say that s satisfies the limit set intersection +property if for all h, k ∈ s we have λ(h) ∩ λ(k) = λ(h ∩ k). 
given a hyperbolic group admitting a decomposition into a finite graph of hyperbolic groups +structure with qi embedded condition, we show that the set of conjugates of +all the vertex and edge groups satisfy the limit set intersection property.",4 +"abstract +localization performance in wireless networks has been traditionally benchmarked using the cramérrao lower bound (crlb), given a fixed geometry of anchor nodes and a target. however, by endowing +the target and anchor locations with distributions, this paper recasts this traditional, scalar benchmark +as a random variable. the goal of this work is to derive an analytical expression for the distribution of +this now random crlb, in the context of time-of-arrival-based positioning. +to derive this distribution, this work first analyzes how the crlb is affected by the order statistics +of the angles between consecutive participating anchors (i.e., internodal angles). this analysis reveals +an intimate connection between the second largest internodal angle and the crlb, which leads to +an accurate approximation of the crlb. using this approximation, a closed-form expression for the +distribution of the crlb, conditioned on the number of participating anchors, is obtained. +next, this conditioning is eliminated to derive an analytical expression for the marginal crlb +distribution. since this marginal distribution accounts for all target and anchor positions, across all +numbers of participating anchors, it therefore statistically characterizes localization error throughout an +entire wireless network. this paper concludes with a comprehensive analysis of this new network-widecrlb paradigm.",7 +"abstract. +load balancing is a well-studied problem, with balls-in-bins being the primary framework. the +greedy algorithm greedy[d] of azar et al. places each ball by probing d > 1 random bins and placing +the ball in the least loaded of them. 
with high probability, the maximum load under greedy[d] +is exponentially lower than the result when balls are placed uniformly randomly. vöcking showed +that a slightly asymmetric variant, left[d], provides a further significant improvement. however, this +improvement comes at an additional computational cost of imposing structure on the bins. +here, we present a fully decentralized and easy-to-implement algorithm called firstdiff[d] that +combines the simplicity of greedy[d] and the improved balance of left[d]. the key idea in firstdiff[d] +is to probe until a different bin size from the first observation is located, then place the ball. although +the number of probes could be quite large for some of the balls, we show that firstdiff[d] requires +only at most d probes on average per ball (in both the standard and the heavily-loaded settings). +thus the number of probes is no greater than either that of greedy[d] or left[d]. more importantly, +we show that firstdiff[d] closely matches the improved maximum load ensured by left[d] in both the +standard and heavily-loaded settings. we further provide a tight lower bound on the maximum load +up to o(log log log n) terms. we additionally give experimental data that firstdiff[d] is indeed as +good as left[d], if not better, in practice. +key words. +allocation",8 +"abstract +rules, computing and visualization in science, 1997(1[1]): 41–52 +[82] tetgen: a quality tetrahedral mesh generator and a 3d delaunay triangulator, +http://tetgen.berlios.de/ +[83] w. f. mitchell, hamiltonian paths through two- and three-dimensional grids, journal of +research of the nist, 2005(110): 127–136 +[84] w. f. mitchell, the refinement-tree partition for parallel solution of partial differential +equations, journal of research of the nist, 1998(103): 405–414 +[85] g. heber, r. biswas and g. r. 
gao, self-avoiding walks over two-dimensional adaptive unstructured meshes, nas technical report, nas-98-007, nasa ames research +center, 1998 +[86] hans sagan, space-filling curves, springer-verlag, new york, 1994 +[87] l. velho and j gomes de miranda, digital halftoning with space-filling curves, computer +graphics, 1991(25): 81–90 +[88] g. m. morton, a computer oriented geodetic database and a new technique in file +sequencing, technical report, ottawa, canada, 1966 +[89] guohua jin and john mellor-crummey, using space-filling curves for computation +reordering, in proceedings of the los alamos computer science institute sixth annual +symposium, 2005 +[90] c. j. alpert and a. b. kahng, multi-way partitioning via spacefilling curves and dynamic +programming, in proceedings of the 31st annual conference on design automation +conference, 1994: 652–657 +[91] d. abel and d. mark, a comparative analysis of some two-dimensional orderings, international j. of geographical information and systems, 1990(4[1]): 21–31 +[92] j. bartholdi iii and p. goldsman, vertex-labeling algorithms for the hilbert spacefilling +curve, software: practice and experience, 2001(31): 395–408 +[93] c. böhm, s. berchtold and d. a. keim, searching in high-dimensional spaces: index +structures for improving the performance of multimedia databases, acm computing +surveys, 2001(33): 322–373 +[94] a. r. butz, space filling curves and mathematical programming, information and control, 1968(12): 314–330",5 +"abstract +this paper focuses on effectivity aspects of the lüroth’s theorem in differential +fields. let f be an ordinary differential field of characteristic 0 and f hui be the field +of differential rational functions generated by a single indeterminate u. let be given +non constant rational functions v1 , . . . , vn ∈ f hui generating a differential subfield +g ⊆ f hui. the differential lüroth’s theorem proved by ritt in 1932 states that there +exists v ∈ g such that g = f hvi.
here we prove that the total order and degree of a +generator v are bounded by minj ord(vj ) and (nd(e + 1) + 1)2e+1 , respectively, where +e := maxj ord(vj ) and d := maxj deg(vj ). as a byproduct, our techniques enable us +to compute a lüroth generator by dealing with a polynomial ideal in a polynomial +ring in finitely many variables.",0 +"abstract. the concept of a c-approximable group, for a class of finite +groups c, is a common generalization of the concepts of a sofic, weakly +sofic, and linear sofic group. +glebsky raised the question whether all groups are approximable +by finite solvable groups with arbitrary invariant length function. we +answer this question by showing that any non-trivial finitely generated +perfect group does not have this property, generalizing a counterexample +of howie. on a related note, we prove that any non-trivial group which +can be approximated by finite groups has a non-trivial quotient that can +be approximated by finite special linear groups. +moreover, we discuss the question which connected lie groups can +be embedded into a metric ultraproduct of finite groups with invariant +length function. we prove that these are precisely the abelian ones, +providing a negative answer to a question of doucha. referring to a +problem of zilber, we show that a the identity component of a lie group, +whose topology is generated by an invariant length function and which +is an abstract quotient of a product of finite groups, has to be abelian. +both of these last two facts give an alternative proof of a result of +turing. finally, we solve a conjecture of pillay by proving that the +identity component of a compactification of a pseudofinite group must +be abelian as well. +all results of this article are applications of theorems on generators +and commutators in finite groups by the first author and segal. 
in section 4 we also use results of liebeck and shalev on bounded generation +in finite simple groups.",4 +"abstract +this paper addresses the question of emotion classification. the +task consists in predicting emotion labels (taken among a set of +possible labels) best describing the emotions contained in short +video clips. building on a standard framework – lying in describing +videos by audio and visual features used by a supervised classifier +to infer the labels – this paper investigates several novel directions. +first of all, improved face descriptors based on 2d and 3d convolutional neural networks are proposed. second, the paper explores +several fusion methods, temporal and multimodal, including a novel +hierarchical method combining features and scores. in addition, we +carefully reviewed the different stages of the pipeline and designed +a cnn architecture adapted to the task; this is important as the size +of the training set is small compared to the difficulty of the problem, +making generalization difficult. the so-obtained model ranked 4th +at the 2017 emotion in the wild challenge with the accuracy of +58.8 %.",1 +"abstract—the randles circuit (including a parallel +resistor and capacitor in series with another resistor) +and its generalised topology have widely been employed +in electrochemical energy storage systems such as batteries, fuel cells and supercapacitors, also in biomedical +engineering, for example, to model the electrode-tissue +interface in electroencephalography and baroreceptor +dynamics. this paper studies identifiability of generalised randles circuit models, that is, whether the +model parameters can be estimated uniquely from the +input-output data. it is shown that generalised randles circuit models are structurally locally identifiable. +the condition that makes the model structure globally +identifiable is then discussed. finally, the estimation +accuracy is evaluated through extensive simulations. 
+index terms—randles circuit, identifiability, system +identification, parameter estimation.",3 +"abstract. we prove that iterated toric fibre products from a finite collection of +toric varieties are defined by binomials of uniformly bounded degree. this implies +that markov random fields built up from a finite collection of finite graphs have +uniformly bounded markov degree.",0 +"abstract. we propose a simple global computing framework, whose main concern is +code migration. systems are structured in sites, and each site is divided into two parts: a +computing body, and a membrane which regulates the interactions between the computing +body and the external environment. more precisely, membranes are filters which control +access to the associated site, and they also rely on the well-established notion of trust +between sites. we develop a basic theory to express and enforce security policies via +membranes. initially, these only control the actions incoming agents intend to perform +locally. we then adapt the basic theory to encompass more sophisticated policies, where +the number of actions an agent wants to perform, and also their order, are considered.",6 +"abstract—objective: to present the first real-time a posteriori error-driven adaptive finite element approach for realtime simulation and to demonstrate the method on a needle +insertion problem. methods: we use corotational elasticity and a +frictional needle/tissue interaction model. the problem is solved +using finite elements within sofa1 . the refinement strategy +relies upon a hexahedron-based finite element method, combined +with a posteriori error estimation driven local h-refinement, for +simulating soft tissue deformation. results: we control the local +and global error level in the mechanical fields (e.g. displacement +or stresses) during the simulation. 
we show the convergence +of the algorithm on academic examples, and demonstrate its +practical usability on a percutaneous procedure involving needle +insertion in a liver. for the latter case, we compare the force +displacement curves obtained from the proposed adaptive algorithm with those obtained from a uniform refinement approach. +conclusions: error control guarantees that a tolerable error level +is not exceeded during the simulations. local mesh refinement +accelerates simulations. significance: our work provides a first +step to discriminate between discretization error and modeling +error by providing a robust quantification of discretization error +during simulations. +index terms—finite element method, real-time error estimate, +adaptive refinement, constraint-based interaction.",5 +"abstract +the task of a neural associative memory is to retrieve a set of previously +memorized patterns from their noisy versions using a network of neurons. +an ideal network should have the ability to 1) learn a set of patterns as they +arrive, 2) retrieve the correct patterns from noisy queries, and 3) maximize +the pattern retrieval capacity while maintaining the reliability in responding +to queries. the majority of work on neural associative memories has focused +on designing networks capable of memorizing any set of randomly chosen +patterns at the expense of limiting the retrieval capacity. +in this paper, we show that if we target memorizing only those patterns +that have inherent redundancy (i.e., belong to a subspace), we can obtain all +the aforementioned properties. this is in sharp contrast with the previous +work that could only improve one or two aspects at the expense of the third. +more specifically, we propose a framework based on a convolutional neural",9 +"abstract—in this paper, we propose a new approach to +network performance analysis, which is based on our previous +works on the deterministic network analysis using the gaussian approximation (dna-ga).
first, we extend our previous +works to a signal-to-interference ratio (sir) analysis, which +makes our dna-ga analysis a formal microscopic analysis tool. +second, we show two approaches for upgrading the dna-ga +analysis to a macroscopic analysis tool. finally, we perform a +comparison between the proposed dna-ga analysis and the +existing macroscopic analysis based on stochastic geometry. our +results show that the dna-ga analysis possesses a few special +features: (i) shadow fading is naturally considered in the dnaga analysis; (ii) the dna-ga analysis can handle non-uniform +user distributions and any type of multi-path fading; (iii) the +shape and/or the size of cell coverage areas in the dna-ga +analysis can be made arbitrary for the treatment of hotspot +network scenarios. thus, dna-ga analysis is very useful for +the network performance analysis of the 5th generation (5g) +systems with general cell deployment and user distribution, both +on a microscopic level and on a macroscopic level. 1",7 +"abstract—one of the methods for stratifying different molecular classes of breast cancer is the nottingham prognostic index plus +(npi+) which uses breast cancer relevant biomarkers to stain tumour tissues prepared on tissue microarray (tma). to determine the +molecular class of the tumour, pathologists will have to manually mark the nuclei activity biomarkers through a microscope and use a +semi-quantitative assessment method to assign a histochemical score (h-score) to each tma core. however, manually marking +positively stained nuclei is a time consuming, imprecise and subjective process which will lead to inter-observer and intra-observer +discrepancies. in this paper, we present an end-to-end deep learning system which directly predicts the h-score automatically. 
the +innovative characteristic of our method is that it is inspired by the h-scoring process of the pathologists where they count the total +number of cells, the number of tumour cells, and categorise the cells based on the intensity of their positive stains. our system imitates +the pathologists’ decision process and uses one fully convolutional network (fcn) to extract all nuclei regions (tumour and non-tumour), +a second fcn to extract the tumour nuclei region, and a multi-column convolutional neural network which takes the outputs of the first two +fcns and the stain intensity description image as input and acts as the high-level decision making mechanism to directly output the +h-score of the input tma image. in addition to developing the deep learning framework, we also present methods for constructing the +positive stain intensity description image and for handling discrete scores with numerical gaps. whilst deep learning has been widely +applied in digital pathology image analysis, to the best of our knowledge, this is the first end-to-end system that takes a tma image as +input and directly outputs a clinical score. we will present experimental results which demonstrate that the h-scores predicted by our +model have very high and statistically significant correlation with experienced pathologists’ scores and that the h-scoring discrepancy +between our algorithm and the pathologists is on par with that between the pathologists. although it is still a long way from clinical use, +this work demonstrates the possibility of using deep learning techniques to automatically and directly predict the clinical scores of +digital pathology images. +index terms—h-score, immunohistochemistry, diaminobenzidine, convolutional neural network, breast cancer",1 +"abstract +in this paper, we propose a new class of lattices constructed from polar codes, namely polar lattices, to achieve the +capacity",7 +"abstract. ashtiani et al.
(nips 2016) introduced a semi-supervised +framework for clustering (ssac) where a learner is allowed to make same-cluster queries. more specifically, in their model, there is a query oracle +that answers queries of the form “given any two vertices, do they belong +to the same optimal cluster?”. in many clustering contexts, this kind of +oracle query is feasible. ashtiani et al. showed the usefulness of such a +query framework by giving a polynomial time algorithm for the k-means +clustering problem where the input dataset satisfies some separation +condition. ailon et al. extended the above work to the approximation +setting by giving an efficient (1+ε)-approximation algorithm for k-means +for any small ε > 0 and any dataset within the ssac framework. in +this work, we extend this line of study to the correlation clustering +problem. correlation clustering is a graph clustering problem where +pairwise similarity (or dissimilarity) information is given for every pair +of vertices and the objective is to partition the vertices into clusters that +minimise the disagreement (or maximise the agreement) with the pairwise +information given as input. these problems are popularly known as +mindisagree and maxagree problems, and mindisagree[k] and maxagree[k] +are versions of these problems where the number of optimal clusters is at +most k. there exist polynomial time approximation schemes (ptas) +for mindisagree[k] and maxagree[k] where the approximation guarantee +is (1 + ε) for any small ε and the running time is polynomial in the input +parameters but exponential in k and 1/ε. we get a significant running +time improvement within the ssac framework at the cost of making a +small number of same-cluster queries. we obtain a (1+ε)-approximation +algorithm for any small ε with running time that is polynomial in the +input parameters and also in k and 1/ε.
we also give non-trivial upper +and lower bounds on the number of same-cluster queries, the lower bound +being based on the exponential time hypothesis (eth). note that the +existence of an efficient algorithm for mindisagree[k] in the ssac setting +exhibits the power of same-cluster queries since such a polynomial time +algorithm (polynomial even in k and 1/ε) is not possible in the classical",8 +"abstract +demands on the disaster response capacity of the european union are likely +to increase, as the impacts of disasters continue to grow both in size and +frequency. this has resulted in intensive research on issues concerning +spatially-explicit information and modelling and their multiple sources of +uncertainty. geospatial support is one of the forms of assistance frequently +required by emergency response centres along with hazard forecast and event +management assessment. robust modelling of natural hazards requires dynamic simulations under an array of multiple inputs from different sources. +uncertainty is associated with meteorological forecast and calibration of the +model parameters. software uncertainty also derives from the data transformation models (d-tm) needed for predicting hazard behaviour and its +consequences. on the other hand, social contributions have recently been +recognized as valuable in raw-data collection and mapping efforts traditionally dominated by professional organizations. here an architecture overview +is proposed for adaptive and robust modelling of natural hazards, following +the semantic array programming paradigm to also include the distributed +array of social contributors called citizen sensor in a semantically-enhanced +strategy for d-tm modelling.
the modelling architecture proposes a multicriteria approach for assessing the array of potential impacts with qualitative rapid assessment methods based on a partial open loop feedback +control (polfc) schema and complementing more traditional and accurate a-posteriori assessment. we discuss the computational aspect of environmental risk modelling using array-based parallel paradigms on high +performance computing (hpc) platforms, in order for the implications of +urgency to be introduced into the systems (urgent-hpc). +keywords: geospatial, integrated natural resources modelling and management, +semantic array programming, warning system, remote sensing, parallel application, high performance computing, partial open loop feedback control +1",5 +"abstract +the use of programming languages such as java and c in open +source software (oss) has been well studied. however, many +other popular languages such as xsl or xml have received minor +attention. in this paper, we discuss some trends in oss +development that we observed when considering multiple +programming language evolution of oss. based on the revision +data of 22 oss projects, we tracked the evolution of language usage +and other artefacts such as documentation files, binaries and +graphics files. in these systems several different languages and +artefact types including c/c++, java, xml, xsl, makefile, +groovy, html, shell scripts, css, graphics files, javascript, jsp, +ruby, python, xquery, opendocument files, php, etc. have been +used. we found that the amount of code written in different +languages differs substantially. some of our findings can be +summarized as follows: (1) javascript and css files most often co-evolve with xsl; (2) most java developers but only every second +c/c++ developer work with xml; (3) and more generally, we +observed a significant increase of usage of xml and xsl during +recent years and found that java or c are hardly ever the only +language used by a developer.
in fact, a developer works with more +than 5 different artefact types (or 4 different languages) in a project +on average.",6 +"abstract—augmented reality is an emerging technology in +many application domains. among them is the beauty industry, +where live virtual try-on of beauty products is of great +importance. in this paper, we address the problem of live +hair color augmentation. to achieve this goal, hair needs to be +segmented quickly and accurately. we show how a modified +mobilenet cnn architecture can be used to segment the hair in +real-time. instead of training this network using large amounts +of accurate segmentation data, which is difficult to obtain, we +use crowd sourced hair segmentation data. while such data +is much simpler to obtain, the segmentations there are noisy +and coarse. despite this, we show how our system can produce +accurate and fine-detailed hair mattes, while running at over +30 fps on an ipad pro tablet. +keywords-hair segmentation; matting; augmented reality; +deep learning; neural networks",1 +"abstract +ensemble discriminative tracking utilizes a committee of +classifiers to label data samples, which are, in turn, used for +retraining the tracker to localize the target using the collective knowledge of the committee. committee members could +vary in their features, memory update schemes, or training +data; however, it is inevitable to have committee members +that excessively agree because of large overlaps in their +version space. to remove this redundancy and have effective ensemble learning, it is critical for the committee to +include consistent hypotheses that differ from one another, +covering the version space with minimum overlaps. in this +study, we propose an online ensemble tracker that directly +generates a diverse committee by generating an efficient set +of artificial training data.
the artificial data is sampled from the +empirical distribution of the samples taken from both target and background, whereas the process is governed by +query-by-committee to shrink the overlap between classifiers. the experimental results demonstrate that the proposed scheme outperforms conventional ensemble trackers +on public benchmarks.",1 +"abstract +we study the well-known problem of estimating a sparse n-dimensional +unknown mean vector θ = (θ1 , ..., θn ) with entries corrupted by gaussian white noise. in the bayesian framework, continuous shrinkage +priors which can be expressed as scale-mixture normal densities are +popular for obtaining sparse estimates of θ. in this article, we introduce a new fully bayesian scale-mixture prior known as the inverse +gamma-gamma (igg) prior. we prove that the posterior distribution +contracts around the true θ at (near) minimax rate under very mild +conditions. in the process, we prove that the sufficient conditions for +minimax posterior contraction given by van der pas et al. [25] are not +necessary for optimal posterior contraction. we further show that the +igg posterior density concentrates at a rate faster than those of the +horseshoe or the horseshoe+ in the kullback-leibler (k-l) sense. to +classify true signals (θi 6= 0), we also propose a hypothesis test based +on thresholding the posterior mean. taking the loss function to be the +expected number of misclassified tests, we show that our test procedure asymptotically attains the optimal bayes risk exactly. we illustrate through simulations and data analysis that the igg has excellent +finite sample performance for both estimation and classification. +∗ +keywords and phrases: normal means problem, sparsity, nearly black vectors, posterior contraction, multiple hypothesis testing, heavy tail, shrinkage estimation +† +malay ghosh (email: ghoshm@ufl.edu) is distinguished professor, department of +statistics, university of florida. 
ray bai (email: raybai07@ufl.edu) is a graduate student, +department of statistics, university of florida.",10 +abstract,1 +"abstract. all solutions sat (allsat for short) is a variant of +the propositional satisfiability problem. despite its significance, allsat has been relatively unexplored compared to other variants. +we thus survey and discuss major techniques of allsat solvers. +we faithfully implement them and conduct comprehensive experiments using a large number of instances and various types of solvers +including one of the few publicly available tools. the experiments reveal +the solvers’ characteristics. our implemented solvers are made publicly +available so that other researchers can easily develop their solver +by modifying our code and compare it with existing methods.",8 +"abstract +in the euclidean tsp with neighborhoods (tspn), we are given a collection of n regions +(neighborhoods) and we seek a shortest tour that visits each region. as a generalization of the +classical euclidean tsp, tspn is also np-hard. in this paper, we present new approximation +results for the tspn, including (1) a constant-factor approximation algorithm for the case of +arbitrary connected neighborhoods having comparable diameters; and (2) a ptas for the important special case of disjoint unit disk neighborhoods (or nearly disjoint, nearly-unit disks). +our methods also yield improved approximation ratios for various special classes of neighborhoods, which have previously been studied. further, we give a linear-time o(1)-approximation +algorithm for the case of neighborhoods that are (infinite) straight lines.",8 +"abstract +the growing importance of massive datasets with +the advent of deep learning makes robustness to +label noise a critical property for classifiers to +have. sources of label noise include automatic labeling for large datasets, non-expert labeling, and +label corruption by data poisoning adversaries.
in +the latter case, corruptions may be arbitrarily bad, +even so bad that a classifier predicts the wrong labels with high confidence. to protect against such +sources of noise, we leverage the fact that a small +set of clean labels is often easy to procure. we +demonstrate that robustness to label noise up to +severe strengths can be achieved by using a set of +trusted data with clean labels, and propose a loss +correction that utilizes trusted examples in a data-efficient manner to mitigate the effects of label +noise on deep neural network classifiers. across +vision and natural language processing tasks, we +experiment with various label noises at several +strengths, and show that our method significantly +outperforms existing methods.",9 +"abstract. we reduce the openness conjecture of demailly and kollár on +the singularities of plurisubharmonic functions to a purely algebraic statement.",0 +"abstract +we present a general method to compute a presentation for any cusped hyperbolic lattice γ, applying a +classical result of macbeath to a suitable γ-invariant horoball cover of the corresponding symmetric space. +as applications we compute presentations for the picard modular groups pu(2, 1, od ) for d = 1, 3, 7 and the +quaternionic lattice pu(2, 1, h) with entries in the hurwitz integer ring h.",4 +"abstract—information in neural networks is represented as +weighted connections, or synapses, between neurons. this poses +a problem as the primary computational bottleneck for neural +networks is the vector-matrix multiply when inputs are multiplied +by the neural network weights. conventional processing architectures are not well suited for simulating neural networks, often requiring large amounts of energy and time. additionally, synapses +in biological neural networks are not binary connections, but +exhibit a nonlinear response function as neurotransmitters are +emitted and diffuse between neurons.
inspired by neuroscience +principles, we present a digital neuromorphic architecture, the +spiking temporal processing unit (stpu), capable of modeling +arbitrarily complex synaptic response functions without requiring +additional hardware components. we consider the paradigm of +spiking neurons with temporally coded information as opposed to +non-spiking rate coded neurons used in most neural networks. in +this paradigm we examine liquid state machines applied to speech +recognition and show how a liquid state machine with temporal +dynamics maps onto the stpu—demonstrating the flexibility and +efficiency of the stpu for instantiating neural algorithms.",9 +"abstract— the most commonly used weighted least square state +estimator in the power industry is nonlinear and formulated by using +conventional measurements such as line flow and injection +measurements. pmus (phasor measurement units) are gradually +being added to improve the state estimation process. in this +paper, the way of incorporating the pmu data into the conventional +measurements and a linear formulation of the state estimation +using only pmu measured data are investigated. six cases are +tested while gradually increasing the number of pmus which are +added to the measurement set, and the effect of pmus on the +accuracy of the variables is illustrated and compared by applying +them to the ieee 14 and 30 test systems. +keywords-conventional state estimation; hybrid +estimation; linear formulation; phasor measurement unit",3 +"abstract. when does a noetherian commutative ring r have uniform symbolic topologies on +primes–read, when does there exist an integer d > 0 such that the symbolic power p (dr) ⊆ p r for +all prime ideals p ⊆ r and all r > 0? groundbreaking work of ein-lazarsfeld-smith, as extended +by hochster and huneke, and by ma and schwede in turn, provides a beautiful answer in the setting +of finite-dimensional excellent regular rings.
it is natural to then sleuth for analogues where the ring +r is non-regular, or where the above ideal containments can be improved using a linear function +whose growth rate is slower. this manuscript falls under the overlap of these research directions. +working with a prescribed type of prime ideal q inside of tensor products of domains of finite type +over an algebraically closed field f, we present binomial- and multinomial expansion criteria for +containments of type q(er) ⊆ qr , or even better, of type q(e(r−1)+1) ⊆ qr for all r > 0. the final +section consolidates remarks on how often we can utilize these criteria, presenting an example.",0 +"abstract +we present a quasi-analytic perturbation expansion for multivariate n dimensional gaussian integrals. the perturbation expansion is an infinite series +of lower-dimensional integrals (one-dimensional in the simplest approximation). +this perturbative idea can also be applied to multivariate student-t integrals. we +evaluate the perturbation expansion explicitly through 2nd order, and discuss the +convergence, including enhancement using padé approximants. brief comments +on potential applications in finance are given, including options, models for +credit risk and derivatives, and correlation sensitivities.",5 +"abstract +an important class of physical systems that are of interest in practice are inputoutput open quantum systems that can be described by quantum stochastic differential equations and defined on an infinite-dimensional underlying hilbert space. +most commonly, these systems involve coupling to a quantum harmonic oscillator +as a system component. this paper is concerned with error bounds in the finitedimensional approximations of input-output open quantum systems defined on an +infinite-dimensional hilbert space. 
we develop a framework for deriving error +bounds between the time evolution of the state of a class of infinite-dimensional +quantum systems and its approximation on a finite-dimensional subspace of the original, when both are initialized in the latter subspace. this framework is then applied +to two approaches for obtaining finite-dimensional approximations: subspace truncation and adiabatic elimination. applications of the bounds to some physical examples +drawn from the literature are provided to illustrate our results.",3 +"abstract—hardware impairments, such as phase noise, quantization errors, non-linearities, and noise amplification, have +baneful effects on wireless communications. in this paper, we +investigate the effect of hardware impairments on multipair +massive multiple-input multiple-output (mimo) two-way full-duplex relay systems with the amplify-and-forward scheme. more +specifically, novel closed-form approximate expressions for the +spectral efficiency are derived to obtain some important insights +into the practical design of the considered system. when the +number of relay antennas n increases without bound, we propose +a hardware scaling law, which reveals that the level of hardware +impairments that can be tolerated is roughly proportional to +√n. this new result inspires us to design low-cost and practical +multipair massive mimo two-way full-duplex relay systems. +moreover, the optimal number of relay antennas is derived to +maximize the energy efficiency. finally, monte-carlo simulation +results are provided to validate our analytical results.",7 +"abstract. we introduce a new native code compiler for curry codenamed +sprite. sprite is based on the fair scheme, a compilation strategy that provides +instructions for transforming declarative, non-deterministic programs of a certain +class into imperative, deterministic code.
we outline salient features of sprite, +discuss its implementation of curry programs, and present benchmarking results. sprite is the first-to-date operationally complete implementation of curry. +preliminary results show that ensuring this property does not incur a significant +penalty.",6 +"abstract. several well-known open questions (such as: are all +groups sofic/hyperlinear?) have a common form: can all groups be +approximated by asymptotic homomorphisms into the symmetric +groups sym(n) (in the sofic case) or the finite dimensional unitary groups u(n) (in the hyperlinear case)? in the case of u(n), +the question can be asked with respect to different metrics and +norms. this paper answers, for the first time, one of these versions, showing that there exist finitely presented groups which are +not approximated +by u(n) with respect to the frobenius norm +∥t∥frob = (∑ni,j=1 ∣tij ∣2 )1/2 , t = [tij ]ni,j=1 ∈ mn (c). our strategy is +to show that some higher dimensional cohomology vanishing phenomenon implies stability, that is, every frobenius-approximate homomorphism into finite-dimensional unitary groups is close to an +actual homomorphism. this is combined with existence results of +certain non-residually finite central extensions of lattices in some +simple p-adic lie groups. these groups act on high rank bruhat-tits buildings and satisfy the needed vanishing cohomology phenomenon and are thus stable and not frobenius-approximated.",4 +"abstract—graphs form a natural model for relationships and +interactions between entities, for example, between people in +social and cooperation networks, servers in computer networks, +or tags and words in documents and tweets. but, which of these +relationships or interactions are the most lasting ones?
in this +paper, we study the following problem: given a set of graph +snapshots, which may correspond to the state of an evolving +graph at different time instances, identify the set of nodes that +are the most densely connected in all snapshots. we call this +problem the best friends for ever (bff) problem. we provide +definitions for density over multiple graph snapshots, that capture +different semantics of connectedness over time, and we study the +corresponding variants of the bff problem. we then look at the +on-off bff (o2bff) problem that relaxes the requirement of +nodes being connected in all snapshots, and asks for the densest +set of nodes in at least k of a given set of graph snapshots. +we show that this problem is np-complete for all definitions of +density, and we propose a set of efficient algorithms. finally, we +present experiments with synthetic and real datasets that show +both the efficiency of our algorithms and the usefulness of the +bff and the o2bff problems.",8 +"abstract. we introduce quasi-homomorphisms of cluster algebras, a flexible notion +of a map between cluster algebras of the same type (but with different coefficients). +the definition is given in terms of seed orbits, the smallest equivalence classes of seeds +on which the mutation rules for non-normalized seeds are unambiguous. we present +examples of quasi-homomorphisms involving familiar cluster algebras, such as cluster +structures on grassmannians, and those associated with marked surfaces with boundary. +we explore the related notion of a quasi-automorphism, and compare the resulting group +with other groups of symmetries of cluster structures. for cluster algebras from surfaces, +we determine the subgroup of quasi-automorphisms inside the tagged mapping class +group of the surface.",0 +"abstract.
we compute the zeta functions enumerating graded ideals in the graded +lie rings associated with the free d-generator lie rings fc,d of nilpotency class c for all +c ≤ 2 and for (c, d) ∈ {(3, 3), (3, 2), (4, 2)}. we apply our computations to obtain information about p-adic, reduced, and topological zeta functions, in particular pertaining +to their degrees and some special values.",4 +"abstract +we consider the problem of designing and analyzing differentially private algorithms that +can be implemented on discrete models of computation in strict polynomial time, motivated +by known attacks on floating point implementations of real-arithmetic differentially private +algorithms (mironov, ccs 2012) and the potential for timing attacks on expected polynomial-time algorithms. we use as a case study the basic problem of approximating the histogram of +a categorical dataset over a possibly large data universe x . the classic laplace mechanism +(dwork, mcsherry, nissim, smith, tcc 2006 and j. privacy & confidentiality 2017) does not +satisfy our requirements, as it is based on real arithmetic, and natural discrete analogues, such +as the geometric mechanism (ghosh, roughgarden, sundararajan, stoc 2009 and sicomp +2012), take time at least linear in |x |, which can be exponential in the bit length of the input. +in this paper, we provide strict polynomial-time discrete algorithms for approximate histograms whose simultaneous accuracy (the maximum error over all bins) matches that of the +laplace mechanism up to constant factors, while retaining the same (pure) differential privacy +guarantee. one of our algorithms produces a sparse histogram as output. its “per-bin accuracy” (the error on individual bins) is worse than that of the laplace mechanism by a factor +of log |x |, but we prove a lower bound showing that this is necessary for any algorithm that +produces a sparse histogram.
a second algorithm avoids this lower bound, and matches the +per-bin accuracy of the laplace mechanism, by producing a compact and efficiently computable +representation of a dense histogram; it is based on an (n + 1)-wise independent implementation +of an appropriately clamped version of the discrete geometric mechanism.",8 +"abstract +building automation systems (bas) are exemplars of cyber-physical +systems (cps), incorporating digital control architectures over underlying +continuous physical processes. we provide a modular model library for +bas drawn from expertise developed on a real bas setup. the library +allows the user to build models comprising either physical quantities or digital +control modules. the structure, operation, and dynamics of the model +can be complex, incorporating (i) stochasticity, (ii) non-linearities, (iii) +numerous continuous variables or discrete states, (iv) various input and +output signals, and (v) a large number of possible discrete configurations. +the modular composition of bas components can generate useful cps +benchmarks. we demonstrate this use by means of three realistic case studies, +where corresponding models are built and engaged with different analysis +goals. the benchmarks, the model library and data collected from the +bas setup at the university of oxford, are kept on-line at https:// +github.com/natchi92/basbenchmarks. +keywords 1 cyber-physical systems, building automation systems, thermal modelling, hybrid models, simulation, reachability analysis, probabilistic safety, control synthesis",3 +"abstract +we explicitly compute the exit law of a certain hypoelliptic brownian motion on a solvable lie group. the underlying random variable +can be seen as a multidimensional exponential functional of brownian +motion. as a consequence, we obtain hidden identities in law between +gamma random variables as the probabilistic manifestation of braid +relations.
the classical beta-gamma algebra identity corresponds to +the only braid move in a root system of type a2 . the other ones seem +new. +a key ingredient is a conditional representation theorem. it relates +our hypoelliptic brownian motion conditioned on exiting at a fixed +point to a certain deterministic transform of brownian motion. +the identities in law between gamma variables tropicalize to identities between exponential random variables. these are continuous +versions of identities between geometric random variables related to +changes of parametrizations in lusztig’s canonical basis. hence, we +see that the exit law of our hypoelliptic brownian motion is the geometric analogue of a simple natural measure on lusztig’s canonical +basis.",4 +"abstract. we present guarded dependent type theory, gdtt, an extensional dependent type theory with a ‘later’ modality and clock quantifiers +for programming and proving with guarded recursive and coinductive +types. the later modality is used to ensure the productivity of recursive +definitions in a modular, type based, way. clock quantifiers are used for +controlled elimination of the later modality and for encoding coinductive +types using guarded recursive types. key to the development of gdtt +are novel type and term formers involving what we call ‘delayed substitutions’. these generalise the applicative functor rules for the later +modality considered in earlier work, and are crucial for programming +and proving with dependent types. we show soundness of the type theory with respect to a denotational model. +this is the technical report version of a paper to appear in the proceedings +of fossacs 2016.",6 +abstract,6 +"abstract +this paper investigates a joint source-channel secrecy problem for the shannon cipher broadcast system. we +suppose list secrecy is applied, i.e., a wiretapper is allowed to produce a list of reconstruction sequences and the +secrecy is measured by the minimum distortion over the entire list. 
for discrete communication cases, we propose a
permutation-based uncoded scheme, which cascades a random permutation with a symbol-by-symbol mapping. using
this scheme, we derive an inner bound for the admissible region of secret key rate, list rate, wiretapper distortion,
and distortions of legitimate users. for the converse part, we easily obtain an outer bound for the admissible region
from an existing result. comparing the outer bound with the inner bound shows that the proposed scheme is optimal
under certain conditions. besides, we extend the proposed scheme to the scalar and vector gaussian communication
scenarios, and characterize the corresponding performance as well. for these two cases, we also propose another
uncoded scheme, an orthogonal-transform-based scheme, which achieves the same performance as the permutation-based scheme. interestingly, by introducing the random permutation or the random orthogonal transform into the
traditional uncoded scheme, the proposed uncoded schemes, on one hand, provide a certain level of secrecy, and on
the other hand, do not lose any performance in terms of the distortions for legitimate users.
index terms
uncoded scheme, secrecy, permutation, orthogonal transform, shannon cipher system.",7
"abstract
we study the sample complexity of learning neural networks, by providing new bounds on their
rademacher complexity assuming norm constraints on the parameter matrix of each layer. compared
to previous work, these complexity bounds have improved dependence on the network depth, and under
some additional assumptions, are fully independent of the network size (both depth and width). these
results are derived using some novel techniques, which may be of independent interest.",9
abstract,2
"abstract
the semantics of concurrent data structures is usually given by a sequential specification and a
consistency condition.
linearizability is the most popular consistency condition due to its simplicity and general applicability. nevertheless, for applications that do not require all guarantees +offered by linearizability, recent research has focused on improving performance and scalability +of concurrent data structures by relaxing their semantics. +in this paper, we present local linearizability, a relaxed consistency condition that is applicable +to container-type concurrent data structures like pools, queues, and stacks. while linearizability +requires that the effect of each operation is observed by all threads at the same time, local +linearizability only requires that for each thread t, the effects of its local insertion operations and +the effects of those removal operations that remove values inserted by t are observed by all threads +at the same time. we investigate theoretical and practical properties of local linearizability and +its relationship to many existing consistency conditions. we present a generic implementation +method for locally linearizable data structures that uses existing linearizable data structures as +building blocks. our implementations show performance and scalability improvements over the +original building blocks and outperform the fastest existing container-type implementations. +1998 acm subject classification d.3.1 [programming languages]: formal definitions and +theory—semantics; e.1 [data structures]: lists, stacks, and queues; d.1.3 [software]: programming techniques—concurrent programming +keywords and phrases (concurrent) data structures, relaxed semantics, linearizability",6 +"abstract. it is widely accepted that the immune system undergoes age-related +changes correlating with increased disease in the elderly. t cell subsets have been +implicated. the aim of this work is firstly to implement and validate a simulation +of t regulatory cell (treg) dynamics throughout the lifetime, based on a model by +baltcheva. 
we show that our initial simulation produces
an inversion between precursor and mature tregs at around 20 years of age, though the output differs
significantly from the original laboratory dataset. secondly, this report discusses development of the
model to incorporate new data from a cross-sectional study of healthy blood donors addressing balance
between tregs and th17 cells with novel markers for treg. the potential for simulation to add insight
into immune aging is discussed.",5
"abstract. this paper is devoted to the analysis of the uniform null controllability for a family of nonlinear reaction-diffusion systems approximating a parabolic-elliptic system which models the electrical
activity of the heart. the uniform, with respect to the degenerating parameter, null controllability of
the approximating system by means of a single control is shown. the proof is based on the combination
of carleman estimates and weighted energy inequalities.",3
"abstract
computational techniques have shown much promise in the field of finance, owing to their ability to extract sense out of
dauntingly complex systems. this paper reviews the most promising of these techniques, from traditional computational
intelligence methods to their machine learning siblings, with particular view to their application in optimising the
management of a portfolio of financial instruments. the current state of the art is assessed, and prospective further work
is assessed and recommended.
keywords: reinforcement, learning, temporal, difference, neural, network, portfolio optimisation, genetic algorithm,
genetic programming, markowitz portfolio theory, black-scholes, investment theory.",5
"abstract
the generalization error of deep neural networks via their classification margin is studied in this
work.
our approach is based on the jacobian matrix of a deep neural network and can be applied to +networks with arbitrary non-linearities and pooling layers, and to networks with different architectures +such as feed forward networks and residual networks. our analysis leads to the conclusion that a bounded +spectral norm of the network’s jacobian matrix in the neighbourhood of the training samples is crucial for +a deep neural network of arbitrary depth and width to generalize well. this is a significant improvement +over the current bounds in the literature, which imply that the generalization error grows with either the +width or the depth of the network. moreover, it shows that the recently proposed batch normalization +and weight normalization re-parametrizations enjoy good generalization properties, and leads to a novel +network regularizer based on the network’s jacobian matrix. the analysis is supported with experimental +results on the mnist, cifar-10, lared and imagenet datasets.",9 +"abstract +we propose a new way to self-adjust the mutation rate in population-based +evolutionary algorithms in discrete search spaces. roughly speaking, it consists of +creating half the offspring with a mutation rate that is twice the current mutation +rate and the other half with half the current rate. the mutation rate is then updated +to the rate used in that subpopulation which contains the best offspring. +we analyze how the (1+λ) evolutionary algorithm with this self-adjusting mutation rate optimizes the onemax test function. we prove that this dynamic version +of the (1+λ) ea finds the optimum in an expected optimization time (number of fitness evaluations) of o(nλ/log λ + n log n). this time is asymptotically smaller than +the optimization time of the classic (1 + λ) ea. previous work shows that this performance is best-possible among all λ-parallel mutation-based unbiased black-box +algorithms. 
this result shows that the new way of adjusting the mutation rate can find
optimal dynamic parameter values on the fly. since our adjustment mechanism is
simpler than the ones previously used for adjusting the mutation rate and does not
have parameters itself, we are optimistic that it will find other applications.",9
"abstract
we give a lower bound of ω̃(√n) for the degree-4 sum-of-squares sdp relaxation for the
planted clique problem. specifically, we show that on an erdős-rényi graph g(n, 1/2), with high
probability there is a feasible point for the degree-4 sos relaxation of the clique problem with
an objective value of ω̃(√n), so that the program cannot distinguish between a random graph
and a random graph with a planted clique of size õ(√n). this bound is tight.
we build on the works of deshpande and montanari and meka et al., who give lower bounds
of ω̃(n^{1/3}) and ω̃(n^{1/4}) respectively. we improve on their results by making a perturbation to
the sdp solution proposed in their work, then showing that this perturbation remains psd as
the objective value approaches ω̃(n^{1/2}).
in an independent work, hopkins, kothari and potechin [hkp15] have obtained a similar
lower bound for the degree-4 sos relaxation.",8
"abstract
the motivation of this work is the detection of cerebrovascular accidents by
microwave tomographic imaging. this requires the solution of an inverse
problem relying on a minimization algorithm (for example, gradient-based),
where successive iterations consist in repeated solutions of a direct problem. the reconstruction algorithm is extremely computationally intensive
and makes use of efficient parallel algorithms and high-performance computing. the feasibility of this type of imaging is conditioned on one hand
by an accurate reconstruction of the material properties of the propagation
medium and on the other hand by a considerable reduction in simulation
time.
fulfilling these two requirements will enable a very rapid and accurate
diagnosis. from the mathematical and numerical point of view, this means
solving maxwell’s equations in the time-harmonic regime by appropriate domain
decomposition methods, which are naturally adapted to parallel architectures.
keywords: inverse problem, scalable preconditioners, maxwell’s equations,
microwave imaging
preprint submitted to parallel computing",5
"abstract the data stream mining problem has caused wide concern in the area of machine learning
and data mining. in some recent studies, ensemble classification has been widely used in concept drift
detection; however, most of them regard classification accuracy as a criterion for judging whether
concept drift is happening or not. information entropy is an important and effective method for
measuring uncertainty. based on information entropy theory, a new algorithm using information
entropy to evaluate a classification result is developed. it uses ensemble classification techniques,
and the weight of each classifier is decided through the entropy of the result produced by an
ensemble classifiers system. when the concept in data streams changes, the classifiers whose weight falls
below a threshold value will be abandoned to adapt to the new concept. in the experimental
analysis section, six databases and four proposed algorithms are executed. the results show that the
proposed method can not only handle concept drift effectively, but also achieve a better classification
accuracy and time performance than the contrastive algorithms.
j. wang
1. school of computer and information technology, shanxi university, taiyuan, china
2. key laboratory of computational intelligence and chinese information processing, ministry of education,
taiyuan, china
tel.: +86 0351-7010566
e-mail: wjhwjh@sxu.edu.cn
s. xu
school of computer and information technology, shanxi university, taiyuan, china
e-mail: xushulianghao@126.com
b.
duan +school of computer and information technology, shanxi university, taiyuan, china +e-mail: 6206486@qq.com +c. liu +school of computer science and technology, faculty of electronic information and electrical engineering, dalian +university of technology, dalian, china +e-mail: liucaifeng12345@qq.com +j. liang +key laboratory of computational intelligence and chinese information processing, ministry of education, taiyuan, +china +e-mail: ljy@sxu.edu.cn",8 +"abstract. we determine the product structure on hochschild cohomology +of commutative algebras in low degrees, obtaining the answer in all degrees +for complete intersection algebras. as applications, we consider cyclic extension algebras as well as hochschild and ordinary cohomology of finite abelian +groups.",0 +"abstract +recently, multilayer bootstrap network (mbn) has demonstrated promising performance +in unsupervised dimensionality reduction. it can learn compact representations in standard data sets, i.e. mnist and rcv1. however, as a bootstrap method, the prediction +complexity of mbn is high. in this paper, we propose an unsupervised model compression +framework for this general problem of unsupervised bootstrap methods. the framework +compresses a large unsupervised bootstrap model into a small model by taking the bootstrap model and its application together as a black box and learning a mapping function +from the input of the bootstrap model to the output of the application by a supervised +learner. to specialize the framework, we propose a new technique, named compressive +mbn. it takes mbn as the unsupervised bootstrap model and deep neural network (dnn) +as the supervised learner. our initial result on mnist showed that compressive mbn +not only maintains the high prediction accuracy of mbn but also is over thousands of +times faster than mbn at the prediction stage. 
our result suggests that the new technique integrates the effectiveness of mbn on unsupervised learning and the effectiveness
and efficiency of dnn on supervised learning together for the effectiveness and efficiency
of compressive mbn on unsupervised learning.
keywords: model compression, multilayer bootstrap networks, unsupervised learning.",9
"abstract
we study the problem of approximating the largest root of a real-rooted polynomial of degree
n using its top k coefficients and give nearly matching upper and lower bounds. we present
algorithms with running time polynomial in k that use the top k coefficients to approximate
the maximum root within a factor of n^{1/k} and 1 + o((log n / k)^2) when k ≤ log n and k > log n
respectively. we also prove corresponding information-theoretic lower bounds of n^{ω(1/k)} and
1 + ω((log 2n / k)^2), and show strong lower bounds for a noisy version of the problem in which one is
given access to approximate coefficients.
this problem has applications in the context of the method of interlacing families of polynomials, which was used for proving the existence of ramanujan graphs of all degrees, the solution
of the kadison-singer problem, and bounding the integrality gap of the asymmetric traveling
salesman problem. all of these involve computing the maximum root of certain real-rooted
polynomials for which the top few coefficients are accessible in subexponential time. our results
yield an algorithm with running time 2^{õ(n^{1/3})} for all of them.",8
"abstract.
we examine the issue of stability of probability in reasoning about complex systems with
uncertainty in structure. normally, propositions are viewed as probability functions on an
abstract random graph where it is implicitly assumed that the nodes of the graph have stable
properties. but what if some of the nodes change their characteristics?
this is a situation that
cannot be covered by abstractions of either static or dynamic sets when these changes take
place at regular intervals. we propose the use of sets with elements that change, and modular
forms are proposed to account for one type of such change. an expression for the dependence
of the mean on the probability of the switching elements has been determined. the system is
also analyzed from the perspective of decision between different hypotheses. such sets are
likely to be of use in complex system queries and in analysis of surveys.",2
"abstract
in this work, a strategy to estimate the information transfer between the elements
of a complex system, from the time series associated to the evolution of these elements,
is presented. by using the nearest neighbors of each state, the local approaches of the
deterministic dynamical rule generating the data and the probability density functions,
both marginals and conditionals, necessary to calculate some measures of information
transfer, are estimated.
the performance of the method using numerically simulated data and real signals is
exposed.",7
"abstract
we discuss from a practical point of view a number of issues involved in writing distributed
internet and www applications using lp/clp systems. we describe pillow, a public-domain internet and www programming library for lp/clp systems that we have
designed in order to simplify the process of writing such applications. pillow provides
facilities for accessing documents and code on the www; parsing, manipulating and
generating html and xml structured documents and data; producing html forms;
writing form handlers and cgi-scripts; and processing html/xml templates. an important contribution of pillow is to model html/xml code (and, thus, the content
of www pages) as terms.
the pillow library has been developed in the context of the +ciao prolog system, but it has been adapted to a number of popular lp/clp systems, +supporting most of its functionality. we also describe the use of concurrency and a highlevel model of client-server interaction, ciao prolog’s active modules, in the context of +www programming. we propose a solution for client-side downloading and execution of +prolog code, using generic browsers. finally, we also provide an overview of related work +on the topic. +keywords: www, html, xml, cgi, http, distributed execution, (constraint) +logic programming.",6 +"abstract. we show that call-by-need is observationally equivalent to +weak-head needed reduction. the proof of this result uses a semantical +argument based on a (non-idempotent) intersection type system called +v. interestingly, system v also allows to syntactically identify all the +weak-head needed redexes of a term.",6 +"abstract +pooling is a ubiquitous operation in image processing algorithms that allows for +higher-level processes to collect relevant low-level features from a region of interest. +currently, max-pooling is one of the most commonly used operators in the computational +literature. however, it can lack robustness to outliers due to the fact that it relies merely +on the peak of a function. pooling mechanisms are also present in the primate visual cortex where neurons of higher cortical areas pool signals from lower ones. the receptive +fields of these neurons have been shown to vary according to the contrast by aggregating +signals over a larger region in the presence of low contrast stimuli. we hypothesise that +this contrast-variant-pooling mechanism can address some of the shortcomings of maxpooling. we modelled this contrast variation through a histogram clipping in which the +percentage of pooled signal is inversely proportional to the local contrast of an image. 
we tested our hypothesis by applying it to the phenomenon of colour constancy where a
number of popular algorithms utilise a max-pooling step (e.g. white-patch, grey-edge
and double-opponency). for each of these methods, we investigated the consequences
of replacing their original max-pooling by the proposed contrast-variant-pooling. our
experiments on three colour constancy benchmark datasets suggest that previous results
can be significantly improved by adopting a contrast-variant-pooling mechanism.",1
"abstract
a central question in the era of ’big data’ is what to do with the enormous amount of information. one possibility
is to characterize it through statistics, e.g., averages, or classify it using machine learning, in order to understand
the general structure of the overall data. the perspective in this paper is the opposite, namely that most of the value
in the information in some applications is in the parts that deviate from the average, that are unusual, atypical. we
define what we mean by ’atypical’ in an axiomatic way as data that can be encoded with fewer bits in itself rather
than using the code for the typical data. we show that this definition has good theoretical properties. we then develop
an implementation based on universal source coding, and apply this to a number of real world data sets.
index terms
big data, atypicality, minimum description length, data discovery, anomaly.",7
"abstract
we present a confidence-based single-layer feed-forward learning algorithm spiral (spike regularized adaptive learning) relying on an encoding of activation
spikes. we adaptively update a weight vector relying on confidence estimates and
activation offsets relative to previous activity. we regularize updates proportionally
to item-level confidence and weight-specific support, loosely inspired by the observation from neurophysiology that high spike rates are sometimes accompanied
by low temporal precision.
our experiments suggest that the new learning algorithm spiral is more robust and less prone to overfitting than both the averaged
perceptron and arow.",9
"abstract
this essay discusses whether computers, using artificial intelligence (ai), could create art. the first part concerns ai-based tools for assisting with art making. the history
of technologies that automated aspects of art is covered, including photography and animation. in each case, we see initial fears and denial of the technology, followed by a
blossoming of new creative and professional opportunities for artists. the hype and reality
of artificial intelligence (ai) tools for art making is discussed, together with predictions
about how ai tools will be used. the second part speculates about whether it could ever
happen that ai systems could conceive of artwork, and be credited with authorship of an
artwork. it is theorized that art is something created by social agents, and so computers
cannot be credited with authorship of art in our current understanding. a few ways that
this could change are also hypothesized.",2
"abstract
in this paper, we further develop the approach, originating in [13], to “computation-friendly”
hypothesis testing via convex programming. most of the existing results on hypothesis testing
aim to quantify in a closed analytic form separation between sets of distributions allowing for
reliable decision in precisely stated observation models. in contrast to this descriptive (and highly
instructive) traditional framework, the approach we promote here can be qualified as operational –
the testing routines and their risks are yielded by an efficient computation. all we know in advance
is that, under favorable circumstances, specified in [13], the risk of such test, whether high or low,
as a compensation for the lack of “explanatory +power,” this approach is applicable to a much wider family of observation schemes and hypotheses +to be tested than those where “closed form descriptive analysis” is possible. +in the present paper our primary emphasis is on computation: we make a step further in +extending the principal tool developed in [13] – testing routines based on affine detectors – to a +large variety of testing problems. the price of this development is the loss of blanket near-optimality +of the proposed procedures (though it is still preserved in the observation schemes studied in [13], +which now become particular cases of the general setting considered here).",10 +"abstract. let r be a commutative noetherian local ring and let a be a proper +ideal of r. in this paper, as a main result, it is shown that if m is a gorenstein +r-module with c = ht m a, then hia (m ) = 0 for all i 6= c is completely encoded in +homological properties of hca (m ), in particular in its bass numbers. notice that, +this result provides a generalization of a result of hellus and schenzel which has +been proved before, as a main result, in the case where m = r.",0 +"abstract +the discrete-time distributed bayesian filtering (dbf) algorithm is presented for the problem of tracking a target dynamic +model using a time-varying network of heterogeneous sensing agents. in the dbf algorithm, the sensing agents combine their +normalized likelihood functions in a distributed manner using the logarithmic opinion pool and the dynamic average consensus +algorithm. we show that each agent’s estimated likelihood function globally exponentially converges to an error ball centered +on the joint likelihood function of the centralized multi-sensor bayesian filtering algorithm. we rigorously characterize the +convergence, stability, and robustness properties of the dbf algorithm. 
moreover, we provide an explicit bound on the time +step size of the dbf algorithm that depends on the time-scale of the target dynamics, the desired convergence error bound, +and the modeling and communication error bounds. furthermore, the dbf algorithm for linear-gaussian models is cast into +a modified form of the kalman information filter. the performance and robust properties of the dbf algorithm are validated +using numerical simulations. +key words: bayesian filtering, distributed estimation, sensor network, data fusion, logarithmic opinion pool.",7 +"abstract +in this article, we discuss a flow–sensitive analysis of equality relationships +for imperative programs. we describe its semantic domains, general purpose operations over abstract computational states (term evaluation and identification, +semantic completion, widening operator, etc.) and semantic transformers corresponding to program constructs. we summarize our experiences from the last +few years concerning this analysis and give attention to applications of analysis of +automatically generated code. among other illustrating examples, we consider a +program for which the analysis diverges without a widening operator and results +of analyzing residual programs produced by some automatic partial evaluator. an +example of analysis of a program generated by this evaluator is given. +keywords: abstract interpretation, value numbering, equality relationships for +program terms, formal grammars, semantic transformers, widening operator, automatically generated programs.",2 +"abstract +in this thesis we present a new algorithm for the vehicle routing problem called the enhanced bees algorithm. it is adapted +from a fairly recent algorithm, the bees algorithm, which was +developed for continuous optimisation problems. 
we show that
the results obtained by the enhanced bees algorithm are competitive with the best meta-heuristics available for the vehicle
routing problem—it is able to achieve results that are within
0.5% of the optimal solution on a commonly used set of test
instances. we show that the algorithm has good runtime performance, producing results within 2% of the optimal solution
within 60 seconds, making it suitable for use within real world
dispatch scenarios. additionally, we provide a short history of
well known results from the literature along with a detailed description of the foundational methods developed to solve the
vehicle routing problem.",9
"abstract
fuel efficient homogeneous charge compression ignition (hcci) engine combustion timing predictions must contend
with non-linear chemistry, non-linear physics, period doubling bifurcation(s), turbulent mixing, model parameters that
can drift day-to-day, and air-fuel mixture state information that cannot typically be resolved on a cycle-to-cycle basis,
especially during transients. in previous work, an abstract cycle-to-cycle mapping function coupled with 𝜖-support
vector regression was shown to predict experimentally observed cycle-to-cycle combustion timing over a wide range of
engine conditions, despite some of the aforementioned difficulties. the main limitation of the previous approach was
that a partially acausal randomly sampled training dataset was used to train proof of concept offline predictions. the
objective of this paper is to address this limitation by proposing a new online adaptive extreme learning machine (elm)
extension named weighted ring-elm. this extension enables fully causal combustion timing predictions at randomly
chosen engine set points, and is shown to achieve results that are as good as or better than the previous offline method.
the broader objective of this approach is to enable a new class of real-time model predictive control strategies for high
variability hcci and, ultimately, to bring hcci’s low engine-out nox and reduced co2 emissions to production engines.
keywords: non-linear, non-stationary, time series, chaos theory, dynamical system, adaptive extreme learning machine",5
"abstract: we study the regularity properties of gaussian fields defined
over spheres cross time. in particular, we consider two alternative spectral
decompositions for a gaussian field on sd × r. for each decomposition, we
establish regularity properties through sobolev and interpolation spaces.
we then propose a simulation method and study its level of accuracy in
the l2 sense. the method turns out to be both fast and efficient.
msc 2010 subject classifications: primary 60g60, 60g17, 41a25; secondary 60g15, 33c55, 46e35, 33c45.
keywords and phrases: gaussian random fields, global data, big data,
space-time covariance, karhunen-loève expansion, spherical harmonics functions, schoenberg’s functions.",10
"abstract
in this paper, we address the problem of sampling from a set and reconstructing a set stored
as a bloom filter. to the best of our knowledge our work is the first to address this question.
we introduce a novel hierarchical data structure called bloomsampletree that helps us design
efficient algorithms to extract an almost uniform sample from the set stored in a bloom filter and
also allows us to reconstruct the set efficiently. in the case where the hash functions used in the
bloom filter implementation are partially invertible, in the sense that it is easy to calculate the
set of elements that map to a particular hash value, we propose a second, more space-efficient
method called hashinvert for the reconstruction. we study the properties of these two methods
both analytically as well as experimentally.
we provide bounds on run times for both methods
and sample quality for the bloomsampletree based algorithm, and show through an extensive
experimental evaluation that our methods are efficient and effective.",8
"abstract
a (δ ≥ k1 , δ ≥ k2 )-partition of a graph g is a vertex-partition (v1 , v2 ) of g satisfying that
δ(g[vi ]) ≥ ki for i = 1, 2. we determine, for all positive integers k1 , k2 , the complexity of deciding
whether a given graph has a (δ ≥ k1 , δ ≥ k2 )-partition.
we also address the problem of finding a function g(k1 , k2 ) such that the (δ ≥ k1 , δ ≥ k2 )-partition
problem is np-complete for the class of graphs of minimum degree less than g(k1 , k2 ) and polynomial for all graphs with minimum degree at least g(k1 , k2 ). we prove that g(1, k) = k for k ≥ 3,
that g(2, 2) = 3 and that g(2, 3), if it exists, has value 4 or 5.
keywords: np-complete, polynomial, 2-partition, minimum degree.",8
"abstract
we re-investigate a fundamental question: how effective is crossover in genetic algorithms in combining building blocks of good solutions? although this has been
discussed controversially for decades, we are still lacking a rigorous and intuitive answer. we provide such answers for royal road functions and onemax, where every bit is a building block. for the latter we show that using crossover makes every
(µ+λ) genetic algorithm at least twice as fast as the fastest evolutionary algorithm using only standard bit mutation, up to small-order terms and for moderate µ and λ.
crossover is beneficial because it effectively turns fitness-neutral mutations into improvements by combining the right building blocks at a later stage. compared to
mutation-based evolutionary algorithms, this makes multi-bit mutations more useful. introducing
crossover changes the optimal mutation rate on onemax from 1/n
to (1 + √5)/2 · 1/n ≈ 1.618/n. this holds both for uniform crossover and k-point
crossover. 
experiments and statistical tests confirm that our findings apply to a broad +class of building-block functions. +keywords +genetic algorithms, crossover, recombination, mutation rate, runtime analysis, theory.",9 +"abstract +in hyperspectral images, some spectral bands suffer from low signal-to-noise ratio due to noisy acquisition and atmospheric +effects, thus requiring robust techniques for the unmixing problem. this paper presents a robust supervised spectral unmixing +approach for hyperspectral images. the robustness is achieved by writing the unmixing problem as the maximization of the +correntropy criterion subject to the most commonly used constraints. two unmixing problems are derived: the first problem +considers the fully-constrained unmixing, with both the non-negativity and sum-to-one constraints, while the second one deals with +the non-negativity and the sparsity-promoting of the abundances. the corresponding optimization problems are solved efficiently +using an alternating direction method of multipliers (admm) approach. experiments on synthetic and real hyperspectral images +validate the performance of the proposed algorithms for different scenarios, demonstrating that the correntropy-based unmixing +is robust to outlier bands.",9 +"abstract. binomial ideals are special polynomial ideals with many algorithmically +and theoretically nice properties. we discuss the problem of deciding if a given polynomial ideal is binomial. while the methods are general, our main motivation and +source of examples is the simplification of steady state equations of chemical reaction +networks. for homogeneous ideals we give an efficient, gröbner-free algorithm for +binomiality detection, based on linear algebra only. on inhomogeneous input the algorithm can only give a sufficient condition for binomiality. 
as a remedy we construct a +heuristic toolbox that can lead to simplifications even if the given ideal is not binomial.",0 +"abstract +repetitive scenario design (rsd) is a randomized approach to robust design based on iterating two phases: a +standard scenario design phase that uses n scenarios (design samples), followed by randomized feasibility phase that +uses no test samples on the scenario solution. we give a full and exact probabilistic characterization of the number +of iterations required by the rsd approach for returning a solution, as a function of n , no , and of the desired levels +of probabilistic robustness in the solution. this novel approach broadens the applicability of the scenario technology, +since the user is now presented with a clear tradeoff between the number n of design samples and the ensuing +expected number of repetitions required by the rsd algorithm. the plain (one-shot) scenario design becomes just +one of the possibilities, sitting at one extreme of the tradeoff curve, in which one insists in finding a solution in a +single repetition: this comes at the cost of possibly high n . other possibilities along the tradeoff curve use lower n +values, but possibly require more than one repetition. +keywords +scenario design, probabilistic robustness, randomized algorithms, random convex programs.",3 +"abstract—active transport is sought in molecular communication to extend coverage, improve reliability, and mitigate +interference. one such active mechanism inherent to many liquid +environments is fluid flow. flow models are often over-simplified, +e.g., assuming one-dimensional diffusion with constant drift. +however, diffusion and flow are usually encountered in threedimensional bounded environments where the flow is highly +non-uniform such as in blood vessels or microfluidic channels. 
+for a qualitative understanding of the relevant physical effects +inherent to these channels a systematic framework is provided +based on the péclet number and the ratio of transmitter-receiver +distance to duct radius. we review the relevant laws of physics +and highlight when simplified models of uniform flow and +advection-only transport are applicable. for several molecular +communication setups, we highlight the effect of different flow +scenarios on the channel impulse response.",7 +"abstract +in this paper, we develop new tools and connections for exponential time approximation. in this +setting, we are given a problem instance and a parameter α > 1, and the goal is to design an +α-approximation algorithm with the fastest possible running time. we show the following results: +an r-approximation for maximum independent set in o∗ (exp(õ(n/r log2 r + r log2 r))) time, +an r-approximation for chromatic number in o∗ (exp(õ(n/r log r + r log2 r))) time, +a (2 − 1/r)-approximation for minimum vertex cover in o∗ (exp(n/rω(r) )) time, and +a (k − 1/r)-approximation for minimum k-hypergraph vertex cover in o∗ (exp(n/(kr)ω(kr) )) +time. +(throughout, õ and o∗ omit polyloglog(r) and factors polynomial in the input size, respectively.) +the best known time bounds for all problems were o∗ (2n/r ) [bourgeois et al. 2009, 2011 & +cygan et al. 2008]. for maximum independent set and chromatic number, these bounds were +complemented by exp(n1−o(1) /r1+o(1) ) lower bounds (under the exponential time hypothesis +(eth)) [chalermsook et al., 2013 & laekhanukit, 2014 (ph.d. thesis)]. our results show that +the naturally-looking o∗ (2n/r ) bounds are not tight for all these problems. the key to these +algorithmic results is a sparsification procedure that reduces a problem to its bounded-degree +variant, allowing the use of better approximation algorithms for bounded degree graphs. for +obtaining the first two results, we introduce a new randomized branching rule. 
finally, we show a connection between pcp parameters and exponential-time approximation
algorithms. this connection together with our independent set algorithm refutes the possibility
to overly reduce the size of chan’s pcp [chan, 2016]. it also implies that a (significant) improvement over our result will refute the gap-eth conjecture [dinur 2016 & manurangsi and
raghavendra, 2016].",8
"abstract. this is a plea for the reopening of the building site for the
classification of finite simple groups in order to include the finite simple
hypergroups.
hypergroups were first introduced by frédéric marty, in 1934, at a
congress in stockholm, not to be confused with a later and quite different notion to which the same name was given, inopportunely.
i am well aware that, probably, quite a few mathematicians must have
already felt uncomfortable about the presence of the so-called sporadic
simple groups in the large tableau of the classification of finite simple
groups, and might have written about it, though i do not have any
reference to mention.
in what follows, i will try to explain, step by step, what a hypergroup
is, and, then, suggest a notion of simplicity for hypergroups, in a simple
and natural way, to match the notion in the case of groups, hoping it
will be fruitful.
examples and constructions are included.",4
"abstract—mobile edge computing (mec) is expected to
be an effective solution to deliver 360-degree virtual reality
(vr) videos over wireless networks. in contrast to the previous
computation-constrained mec framework, which reduces the
computation-resource consumption at the mobile vr device
by increasing the communication-resource consumption, we develop a communications-constrained mec framework to reduce communication-resource consumption by increasing the
computation-resource consumption and exploiting the caching
resources at the mobile vr device in this paper. 
specifically,
according to the task modularization, the mec server can only
deliver the components which have not been stored in the vr
device, and then the vr device uses the received components
and the corresponding cached components to construct the task,
resulting in low communication-resource consumption but high
delay. the mec server can also compute the task by itself to
reduce the delay; however, it consumes more communication resources due to the delivery of the entire task. therefore, we then
propose a task scheduling strategy to decide which computation
model the mec server should operate, in order to minimize
the communication-resource consumption under the delay constraint. finally, we discuss the tradeoffs between communications,
computing, and caching in the proposed system.",7
"abstract. a span program is a linear-algebraic model of computation
which can be used to design quantum algorithms. for any boolean function there exists a span program that leads to a quantum algorithm
with optimal quantum query complexity. in general, finding such span
programs is not an easy task.
in this work, given query access to the adjacency matrix of a simple graph g with n vertices, we provide two new span-program-based
quantum algorithms:
– an algorithm for testing if the graph is bipartite that uses o(n√n)
quantum queries;
– an algorithm for testing if the graph is connected that uses o(n√n)
quantum queries.",8
"abstract—we consider the problem of decentralized hypothesis
testing in a network of energy harvesting sensors, where sensors
make noisy observations of a phenomenon and send quantized
information about the phenomenon towards a fusion center. the
fusion center makes a decision about the present hypothesis
using the aggregate received data during a time interval. we
explicitly consider a scenario under which the messages are sent
through parallel access channels towards the fusion center. 
to
avoid limited lifetime issues, we assume each sensor is capable
of harvesting all the energy it needs for the communication from
the environment. each sensor has an energy buffer (battery) to
save its harvested energy for use in other time intervals. our
key contribution is to formulate the problem of decentralized
detection in a sensor network with energy harvesting devices. our
analysis is based on a queuing-theoretic model for the battery
and we propose a sensor decision design method by considering
long term energy management at the sensors. we show how
the performance of the system changes for different battery
capacities. we then numerically show how our findings can be
used in the design of sensor networks with energy harvesting
sensors.",7
"abstract. for a tree g, we study the changing behaviors in the homology groups hi (bn g)
as n varies, where bn g := π1 (uconf n (g)). we prove that the ranks of these homologies
can be described by a single polynomial for all n, and construct this polynomial explicitly in
terms of invariants of the tree g. to accomplish this we prove that the group ⊕n hi (bn g)
can be endowed with the structure of a finitely generated graded module over an integral
polynomial ring, and further prove that it naturally decomposes as a direct sum of graded
shifts of squarefree monomial ideals. following this, we spend time considering how our
methods might be generalized to braid groups of arbitrary graphs, and make various conjectures in this direction.",0
"abstract
timing guarantees are crucial to cyber-physical applications that must bound the end-to-end
delay between sensing, processing and actuation. for example, in a flight controller for a multirotor drone, the data from a gyro or inertial sensor must be gathered and processed to determine
the attitude of the aircraft. sensor data fusion is followed by control decisions that adjust the
flight of a drone by altering motor speeds. 
if the processing pipeline between sensor input and +actuation is not bounded, the drone will lose control and possibly fail to maintain flight. +motivated by the implementation of a multithreaded drone flight controller on the quest +rtos, we develop a composable pipe model based on the system’s task, scheduling and communication abstractions. this pipe model is used to analyze two semantics of end-to-end time: +reaction time and freshness time. we also argue that end-to-end timing properties should be +factored in at the early stage of application design. thus, we provide a mathematical framework to derive feasible task periods that satisfy both a given set of end-to-end timing constraints +and the schedulability requirement. we demonstrate the applicability of our design approach +by using it to port the cleanflight flight controller firmware to quest on the intel aero board. +experiments show that cleanflight ported to quest is able to achieve end-to-end latencies within +the predicted time bounds derived by analysis. +1998 acm subject classification c.3 real-time and embedded systems +keywords and phrases real-time systems, end-to-end timing analysis, flight controller",3 +"abstract +the primary objective of this paper is to revisit and make a case for +the merits of r.a. fisher’s objections to the decision-theoretic framing of +frequentist inference. it is argued that this framing is congruent with the +bayesian but incongruent with the frequentist inference. it provides the +bayesian approach with a theory of optimal inference, but it misrepresents +the theory of optimal frequentist inference by framing inferences solely in +terms of the universal quantifier ‘for all values of θ in θ’, denoted by +∀θ∈θ. this framing is at odds with the primary objective of modelbased frequentist inference, which is to learn from data about the true +value θ∗ ; the one that gave rise to the particular data. 
the frequentist
approach relies on factual (estimation, prediction), as well as hypothetical
(testing) reasoning, both of which revolve around the existential quantifier
∃θ∗ ∈θ. the paper calls into question the appropriateness of admissibility
and reassesses stein’s paradox as it relates to the capacity of frequentist
estimators to pinpoint θ∗ . the paper also compares and contrasts loss-based errors with traditional frequentist errors, such as coverage, type
i and ii; the former are attached to θ, but the latter to the inference
procedure itself.
key words: decision theoretic framing; bayesian vs. frequentist
inference; stein’s paradox; james-stein estimator; loss functions; admissibility; error probabilities; risk functions",10
"abstract
facial expression recognition (fer) has always
been a challenging issue in computer vision. the
different expressions of emotion and uncontrolled
environmental factors lead to inconsistencies in the
complexity of fer and variability between expression categories, which is often overlooked in
most facial expression recognition systems. in order
to solve this problem effectively, we presented a
simple and efficient cnn model to extract facial
features, and proposed a complexity perception
classification (cpc) algorithm for fer. the cpc
algorithm divided the dataset into an easy classification sample subspace and a complex classification
sample subspace by evaluating the complexity of
facial features that are suitable for classification.
the experimental results of our proposed algorithm
on fer2013 and ck+ datasets demonstrated the
algorithm’s effectiveness and superiority over other
state-of-the-art approaches.",2
"abstract
in this paper, the efficient deployment and mobility of multiple unmanned aerial vehicles (uavs),
used as aerial base stations to collect data from ground internet of things (iot) devices, is investigated. 
in particular, to enable reliable uplink communications for iot devices with a minimum total transmit
power, a novel framework is proposed for jointly optimizing the three-dimensional (3d) placement and
mobility of the uavs, device-uav association, and uplink power control. first, given the locations of
active iot devices at each time instant, the optimal uavs’ locations and associations are determined.
next, to dynamically serve the iot devices in a time-varying network, the optimal mobility patterns
of the uavs are analyzed. to this end, based on the activation process of the iot devices, the time
instances at which the uavs must update their locations are derived. moreover, the optimal 3d trajectory
of each uav is obtained in a way that the total energy used for the mobility of the uavs is minimized
while serving the iot devices. simulation results show that, using the proposed approach, the total
transmit power of the iot devices is reduced by 45% compared to a case in which stationary aerial
base stations are deployed. in addition, the proposed approach can yield a maximum of 28% enhanced
system reliability compared to the stationary case. the results also reveal an inherent tradeoff between
the number of update times, the mobility of the uavs, and the transmit power of the iot devices. in
essence, a higher number of updates can lead to lower transmit powers for the iot devices at the cost
of an increased mobility for the uavs.",7
"abstract
in this work we study a weak order ideal associated with the coset
leaders of a non-binary linear code. this set allows the incremental
computation of the coset leaders and the definition of the set of leader
codewords. 
this set of codewords has some nice properties related to
the monotonicity of the weight compatible order on the generalized
support of a vector in fnq which allows us to describe a test set, a trial set
and the set of zero neighbours of a linear code in terms of the leader
codewords.",7
"abstract. we propose a novel method that uses convolutional neural networks
(cnns) for feature extraction. not just limited to conventional spatial domain representation, we use multilevel 2d discrete haar wavelet transform, where image representations are scaled to a variety of different sizes. these are then used to train
different cnns to select features. to be precise, we use 10 different cnns that select
a set of 10240 features, i.e. 1024/cnn. with this, 11 different handwritten scripts
are identified, where 1k words per script are used. in our test, we have achieved the
maximum script identification rate of 94.73% using multi-layer perceptron (mlp).
our results outperform the state-of-the-art techniques.
keywords: convolutional neural network, deep learning, multi-layer perceptron,
discrete wavelet transform, indic script identification",1
"abstract
steady states of alternating-current (ac) circuits have been studied
in considerable detail. in 1982, baillieul and byrnes derived an upper
bound on the number of steady states in a loss-less ac circuit [ieee
tcas, 29(11): 724–737] and conjectured that this bound holds for ac
circuits in general. we prove this is indeed the case, among other results,
by studying a certain multi-homogeneous structure in an algebraisation.",5
"abstract
this is the second in a series of papers on implementing a discontinuous galerkin (dg) method as an open source
matlab / gnu octave toolbox. the intention of this ongoing project is to offer a rapid prototyping package for
application development using dg methods. the implementation relies on fully vectorized matrix / vector operations
and is comprehensively documented. 
particular attention was paid to maintaining a direct mapping between discretization terms and code routines as well as to supporting the full code functionality in gnu octave. the present
work focuses on a two-dimensional time-dependent linear advection equation with space / time-varying coefficients,
and provides a general order implementation of several slope limiting schemes for the dg method.
keywords: matlab, gnu octave, discontinuous galerkin method, slope limiting, vectorization, open source,
advection operator",5
"abstract. we study the problem of constructing phylogenetic trees for
a given set of species. the problem is formulated as that of finding a
minimum steiner tree on n points over the boolean hypercube of dimension d. it is known that an optimal tree can be found in linear time [1]
if the given dataset has a perfect phylogeny, i.e. cost of the optimal phylogeny is exactly d. moreover, if the data has a near-perfect phylogeny,
i.e. the cost of the optimal steiner tree is d + q, it is known [2] that an
exact solution can be found in running time which is polynomial in the
number of species and d, yet exponential in q. in this work, we give a
polynomial-time algorithm (in both d and q) that finds a phylogenetic
tree of cost d+o(q 2 ). this provides the best guarantees known—namely,
a (1 + o(1))-approximation—for the case log(d) ≪ q ≪ √d, broadening
the range of settings for which near-optimal solutions can be efficiently
found. we also discuss the motivation and reasoning for studying such
additive approximations.",5
"abstract—the visual focus of attention (vfoa) has been
recognized as a prominent conversational cue. we are interested
in estimating and tracking the vfoas associated with multiparty social interactions. we note that in this type of situation
the participants either look at each other or at an object of
interest; therefore their eyes are not always visible. 
consequently +both gaze and vfoa estimation cannot be based on eye detection +and tracking. we propose a method that exploits the correlation +between eye gaze and head movements. both vfoa and gaze +are modeled as latent variables in a bayesian switching statespace model. the proposed formulation leads to a tractable +learning procedure and to an efficient algorithm that simultaneously tracks gaze and visual focus. the method is tested and +benchmarked using two publicly available datasets that contain +typical multi-party human-robot and human-human interactions. +index terms—visual focus of attention, eye gaze, head pose, +dynamic bayesian model, switching kalman filter, multi-party +dialog, human-robot interaction.",1 +"abstract molecular fingerprints, i.e. feature vectors describing atomistic neighborhood configurations, +is an important abstraction and a key ingredient for data-driven modeling of potential energy surface +and interatomic force. in this paper, we present the density-encoded canonically aligned fingerprint +(decaf) fingerprint algorithm, which is robust and efficient, for fitting per-atom scalar and vector +quantities. the fingerprint is essentially a continuous density field formed through the superimposition +of smoothing kernels centered on the atoms. rotational invariance of the fingerprint is achieved by +aligning, for each fingerprint instance, the neighboring atoms onto a local canonical coordinate frame +computed from a kernel minisum optimization procedure. we show that this approach is superior over +pca-based methods especially when the atomistic neighborhood is sparse and/or contains symmetry. +we propose that the ‘distance’ between the density fields be measured using a volume integral of their +pointwise difference. this can be efficiently computed using optimal quadrature rules, which only require +discrete sampling at a small number of grid points. 
we also experiment on the choice of weight functions
for constructing the density fields, and characterize their performance for fitting interatomic potentials.
the applicability of the fingerprint is demonstrated through a set of benchmark problems.
keywords: active learning, gaussian process regression, quantum mechanics, molecular dynamics,
next generation force fields",5
"abstract
in this paper we analyse the benefits of incorporating interval-valued fuzzy
sets into the bousi-prolog system. a syntax, declarative semantics and implementation for this extension is presented and formalised. we show, by
using potential applications, that fuzzy logic programming frameworks enhanced with them can correctly work together with lexical resources and
ontologies in order to improve their capabilities for knowledge representation
and reasoning.
keywords: interval-valued fuzzy sets, approximate reasoning, lexical knowledge resources, fuzzy logic programming, fuzzy prolog.
1. introduction and motivation
nowadays, lexical knowledge resources as well as ontologies of concepts
are widely employed for modelling domain independent knowledge [1, 2] or
email address: clrubio@ubiobio.cl (clemente rubio-manzano)
preprint submitted to studies of computational intelligent series, november 16, 2017",2
"abstract. the entropy of a finite probability space x measures the observable cardinality of large independent products x ⊗n of the probability
space. if two probability spaces x and y have the same entropy, there is an
almost measure-preserving bijection between large parts of x ⊗n and y ⊗n .
in this way, x and y are asymptotically equivalent.
it turns out to be challenging to generalize this notion of asymptotic
equivalence to configurations of probability spaces, which are collections of
probability spaces with measure-preserving maps between some of them. 
+in this article we introduce the intrinsic kolmogorov-sinai distance on +the space of configurations of probability spaces. concentrating on the +large-scale geometry we pass to the asymptotic kolmogorov-sinai distance. +it induces an asymptotic equivalence relation on sequences of configurations +of probability spaces. we will call the equivalence classes tropical probability +spaces. +in this context we prove an asymptotic equipartition property for configurations. it states that tropical configurations can always be approximated by homogeneous configurations. in addition, we show that the +solutions to certain information-optimization problems are lipschitz-continuous with respect to the asymptotic kolmogorov-sinai distance. it follows from these two statements that in order to solve an informationoptimization problem, it suffices to consider homogeneous configurations. +finally, we show that spaces of trajectories of length n of certain stochastic processes, in particular stationary markov chains, have a tropical +limit.",7 +"abstract +in functional logic programs, rules are applicable independently of textual order, i.e., any +rule can potentially be used to evaluate an expression. this is similar to logic languages +and contrary to functional languages, e.g., haskell enforces a strict sequential interpretation of rules. however, in some situations it is convenient to express alternatives by means +of compact default rules. although default rules are often used in functional programs, the +non-deterministic nature of functional logic programs does not allow to directly transfer +this concept from functional to functional logic languages in a meaningful way. in this paper we propose a new concept of default rules for curry that supports a programming style +similar to functional programming while preserving the core properties of functional logic +programming, i.e., completeness, non-determinism, and logic-oriented use of functions. 
we +discuss the basic concept and propose an implementation which exploits advanced features +of functional logic languages. +to appear in theory and practice of logic programming (tplp) +keywords: functional logic programming, semantics, program transformation",6 +"abstract. we develop new computational methods for studying potential counterexamples to the andrews–curtis conjecture, in particular, +akbulut–kurby examples ak(n). we devise a number of algorithms in +an attempt to disprove the most interesting counterexample ak(3). to +improve metric properties of the search space (which is a set of balanced +presentations of 1) we introduce a new transformation (called an acmmove here) that generalizes the original andrews-curtis transformations +and discuss details of a practical implementation. to reduce growth of +the search space we introduce a strong equivalence relation on balanced +presentations and study the space modulo automorphisms of the underlying free group. finally, we prove that automorphism-moves can be +applied to ak(n)-presentations. unfortunately, despite a lot of effort +we were unable to trivialize any of ak(n)-presentations, for n > 2. +keywords. andrews-curtis conjecture, akbulut-kurby presentations, +trivial group, conjugacy search problem, computations. +2010 mathematics subject classification. 20-04, 20f05, 20e05.",4 +"abstract. we show that in any q-gorenstein flat family of klt singularities, normalized volumes are lower semicontinuous with respect to the zariski topology. a quick +consequence is that smooth points have the largest normalized volume among all klt +singularities. using an alternative characterization of k-semistability developed by li, +liu and xu, we show that k-semistability is a very generic or empty condition in any +q-gorenstein flat family of log fano pairs.",0 +"abstract. the critical ideals of a graph are the determinantal ideals of the generalized laplacian +matrix associated to a graph. 
in this article we provide a set of minimal forbidden graphs for the set
of graphs with at most three trivial critical ideals. then we use these forbidden graphs to characterize
the graphs with at most three trivial critical ideals and clique number equal to 2 and 3.",0
"abstract
when applied to training deep neural networks, stochastic gradient descent (sgd)
often incurs steady progression phases, interrupted by catastrophic episodes in
which loss and gradient norm explode. a possible mitigation of such events is to
slow down the learning process.
this paper presents a novel approach to control the sgd learning rate, that uses
two statistical tests. the first one, aimed at fast learning, compares the momentum
of the normalized gradient vectors to that of random unit vectors and accordingly
gracefully increases or decreases the learning rate. the second one is a change
point detection test, aimed at the detection of catastrophic learning episodes; upon
its triggering the learning rate is instantly halved.
both abilities of speeding up and slowing down the learning rate allow the proposed approach, called salera, to learn as fast as possible but not faster. experiments on standard benchmarks show that salera performs well in practice, and
compares favorably to the state of the art.
machine learning (ml) algorithms require efficient optimization techniques, whether to solve convex
problems (e.g., for svms), or non-convex ones (e.g., for neural networks). in the convex setting,
the main focus is on the order of the convergence rate [nesterov, 1983, defazio et al., 2014]. in
the non-convex case, ml is still more of an experimental science. significant efforts are devoted
to devising optimization algorithms (and robust default values for the associated hyper-parameters)
tailored to the typical regime of ml models and problem instances (e.g. 
deep convolutional neural +networks for mnist [le cun et al., 1998] or imagenet [deng et al., 2009]) [duchi et al., 2010, +zeiler, 2012, schaul et al., 2013, kingma and ba, 2014, tieleman and hinton, 2012]. +as the data size and the model dimensionality increase, mainstream convex optimization methods +are adversely affected. hessian-based approaches, which optimally handle convex optimization +problems however ill-conditioned they are, do not scale up and approximations are required [martens +et al., 2012]. overall, stochastic gradient descent (sgd) is increasingly adopted both in convex and +non-convex settings, with good performances and linear tractability [bottou and bousquet, 2008, +hardt et al., 2015]. +within the sgd framework, one of the main issues is to know how to control the learning rate: +the objective is to reach a satisfactory learning speed without triggering any catastrophic event, +manifested by the sudden rocketing of the training loss and the gradient norm. finding ""how much is +not too much"" in terms of learning rate is a slippery game. it depends both on the current state of the +system (the weight vector) and the current mini-batch. often, the eventual convergence of sgd is",9 +"abstract +online matching has received significant attention over the last 15 years due to its close +connection to internet advertising. as the seminal work of karp, vazirani, and vazirani has an +optimal (1−1/e) competitive ratio in the standard adversarial online model, much effort has gone +into developing useful online models that incorporate some stochasticity in the arrival process. +one such popular model is the “known i.i.d. model” where different customer-types arrive +online from a known distribution. 
we develop algorithms with improved competitive ratios for +some basic variants of this model with integral arrival rates, including: (a) the case of general +weighted edges, where we improve the best-known ratio of 0.667 due to haeupler, mirrokni and +zadimoghaddam [12] to 0.705; and (b) the vertex-weighted case, where we improve the 0.7250 +ratio of jaillet and lu [13] to 0.7299. +we also consider an extension of stochastic rewards, a variant where each edge has an +independent probability of being present. for the setting of stochastic rewards with non-integral +arrival rates, we present a simple optimal non-adaptive algorithm with a ratio of 1 − 1/e. for +the special case where each edge is unweighted and has a uniform constant probability of being +present, we improve upon 1 − 1/e by proposing a strengthened lp benchmark. +one of the key ingredients of our improvement is the following (offline) approach to bipartitematching polytopes with additional constraints. we first add several valid constraints in order +to get a good fractional solution f; however, these give us less control over the structure of +f. we next remove all these additional constraints and randomly move from f to a feasible +point on the matching polytope with all coordinates being from the set {0, 1/k, 2/k, . . . , 1} for +a chosen integer k. the structure of this solution is inspired by jaillet and lu (mathematics of +operations research, 2013) and is a tractable structure for algorithm design and analysis. the +appropriate random move preserves many of the removed constraints (approximately with high +probability and exactly in expectation). this underlies some of our improvements and could be +of independent interest. +∗",8 +"abstract +conditions for geometric ergodicity of multivariate autoregressive conditional heteroskedasticity (arch) processes, with the so-called bekk (baba, +engle, kraft, and kroner) parametrization, are considered. 
we show for a +class of bekk-arch processes that the invariant distribution is regularly +varying. in order to account for the possibility of different tail indices of the +marginals, we consider the notion of vector scaling regular variation (vsrv), +closely related to non-standard regular variation. the characterization of the +tail behavior of the processes is used for deriving the asymptotic properties +of the sample covariance matrices.",10 +"abstraction/multi-formalism co-simulation . +a.4 black-box co-simulation . . . . . . . . . . . . . . . . +a.5 real-time co-simulation . . . . . . . . . . . . . . . . +a.6 many simulation units: large scale co-simulation .",3 +"abstract interpretation +f rancesco r anzato f rancesco tapparo +dipartimento di matematica pura ed applicata, università di padova +via belzoni 7, 35131 padova, italy +franz@math.unipd.it tapparo@math.unipd.it",6 +"abstract +many metainterpreters found in the logic programming literature are nondeterministic in +the sense that the selection of program clauses is not determined. examples are the familiar +“demo” and “vanilla” metainterpreters. for some applications this nondeterminism is +convenient. in some cases, however, a deterministic metainterpreter, having an explicit +selection of clauses, is needed. such cases include (1) conversion of or parallelism into +and parallelism for “committed-choice” processors, (2) logic-based, imperative-language +implementation of search strategies, and (3) simulation of bounded-resource reasoning. +deterministic metainterpreters are difficult to write because the programmer must be +concerned about the set of unifiers of the children of a node in the derivation tree. we argue +that it is both possible and advantageous to write these metainterpreters by reasoning +in terms of object programs converted into a syntactically restricted form that we call +“chain” form, where we can forget about unification, except for unit clauses. 
we give two +transformations converting logic programs into chain form, one for “moded” programs +(implicit in two existing exhaustive-traversal methods for committed-choice execution), +and one for arbitrary definite programs. as illustrations of our approach we show examples +of the three applications mentioned above.",6 +"abstract— the paper addresses state estimation for discretetime systems with binary (threshold) measurements by following a maximum a posteriori probability (map) approach +and exploiting a moving horizon (mh) approximation of the +map cost-function. it is shown that, for a linear system +and noise distributions with log-concave probability density +function, the proposed mh-map state estimator involves the +solution, at each sampling interval, of a convex optimization +problem. application of the mh-map estimator to dynamic +estimation of a diffusion field given pointwise-in-time-and-space +binary measurements of the field is also illustrated and, finally, +simulation results relative to this application are shown to +demonstrate the effectiveness of the proposed approach.",3 +"abstract—compressive sensing has been successfully used for +optimized operations in wireless sensor networks. however, raw +data collected by sensors may be neither originally sparse nor +easily transformed into a sparse data representation. this paper +addresses the problem of transforming source data collected by +sensor nodes into a sparse representation with a few nonzero +elements. our contributions that address three major issues +include: 1) an effective method that extracts population sparsity +of the data, 2) a sparsity ratio guarantee scheme, and 3) a +customized learning algorithm of the sparsifying dictionary. we +introduce an unsupervised neural network to extract an intrinsic +sparse coding of the data. the sparse codes are generated at +the activation of the hidden layer using a sparsity nomination +constraint and a shrinking mechanism. 
our analysis using real +data samples shows that the proposed method outperforms +conventional sparsity-inducing methods. +abstract—sparse coding, compressive sensing, sparse autoencoders, wireless sensor networks.",9 +"abstract +in this paper, we formulate the deep residual network (resnet) as a control problem +of transport equation. in resnet, the transport equation is solved along the characteristics. based on this observation, deep neural network is closely related to the control +problem of pdes on manifold. we propose several models based on transport equation, hamilton-jacobi equation and fokker-planck equation. the discretization of these +pdes on point cloud is also discussed.",7 +"abstract—in video surveillance, face recognition (fr) systems +are employed to detect individuals of interest appearing over +a distributed network of cameras. the performance of still-tovideo fr systems can decline significantly because faces captured +in the unconstrained operational domain (od) over multiple +video cameras have a different underlying data distribution +compared to faces captured under controlled conditions in the +enrollment domain (ed) with a still camera. this is particularly +true when individuals are enrolled to the system using a single +reference still. to improve the robustness of these systems, it +is possible to augment the reference set by generating synthetic +faces based on the original still. however, without knowledge of +the od, many synthetic images must be generated to account +for all possible capture conditions. fr systems may therefore +require complex implementations and yield lower accuracy when +training on many less relevant images. this paper introduces an +algorithm for domain-specific face synthesis (dsfs) that exploits +the representative intra-class variation information available +from the od. 
prior to operation (during camera calibration), a +compact set of faces from unknown persons appearing in the od +is selected through affinity propagation clustering in the captured +condition space (defined by pose and illumination estimation). +the domain-specific variations of these face images are then +projected onto the reference still of each individual by integrating +an image-based face relighting technique inside the 3d morphable +model framework. a compact set of synthetic faces is generated +that resemble individuals of interest under capture conditions +relevant to the od. in a particular implementation based on +sparse representation classification, the synthetic faces generated +with the dsfs are employed to form a cross-domain dictionary +that accounts for structured sparsity where the dictionary blocks +combine the original and synthetic faces of each individual. +experimental results obtained with videos from the chokepoint +and cox-s2v datasets reveal that augmenting the reference +gallery set of still-to-video fr systems using the proposed dsfs +approach can provide a significantly higher level of accuracy +compared to state-of-the-art approaches, with only a moderate +increase in its computational complexity. +index terms—face recognition, single sample per person, +face synthesis, 3d face reconstruction, illumination transferring, sparse representation-based classification, video surveillance.",1 +"abstract +linear regression is one of the most prevalent +techniques in machine learning; however, it is +also common to use linear regression for its explanatory capabilities rather than label prediction. ordinary least squares (ols) is often used +in statistics to establish a correlation between +an attribute (e.g. gender) and a label (e.g. income) in the presence of other (potentially correlated) features. 
ols assumes a particular model +that randomly generates the data, and derives tvalues — representing the likelihood of each real +value to be the true correlation. using t-values, +ols can release a confidence interval, which is +an interval on the reals that is likely to contain +the true correlation; and when this interval does +not intersect the origin, we can reject the null hypothesis as it is likely that the true correlation +is non-zero. our work aims at achieving similar guarantees on data under differentially private estimators. first, we show that for wellspread data, the gaussian johnson-lindenstrauss +transform (jlt) gives a very good approximation of t-values; secondly, when jlt approximates ridge regression (linear regression with +l2 -regularization) we derive, under certain conditions, confidence intervals using the projected +data; lastly, we derive, under different conditions, +confidence intervals for the “analyze gauss” algorithm (dwork et al., 2014).",8 +"abstract +semidefinite programs have recently been developed for the problem +of community detection, which may be viewed as a special case of the +stochastic blockmodel. here, we develop a semidefinite program that can +be tailored to other instances of the blockmodel, such as non-assortative +networks and overlapping communities. we establish label recovery in +sparse settings, with conditions that are analogous to recent results for +community detection. in settings where the data is not generated by a +blockmodel, we give an oracle inequality that bounds excess risk relative to the best blockmodel approximation. simulations are presented for +community detection, for overlapping communities, and for latent space +models.",10 +"abstract +we study rank-1 l1-norm-based tucker2 (l1-tucker2) decomposition of 3-way tensors, treated as a +collection of n d × m matrices that are to be jointly decomposed. our contributions are as follows. 
i) we prove +that the problem is equivalent to combinatorial optimization over n antipodal-binary variables. ii) we derive the +first two algorithms in the literature for its exact solution. the first algorithm has cost exponential in n ; the second +one has cost polynomial in n (under a mild assumption). our algorithms are accompanied by formal complexity +analysis. iii) we conduct numerical studies to compare the performance of exact l1-tucker2 (proposed) with +standard hosvd, hooi, glram, pca, l1-pca, and tpca-l1. our studies show that l1-tucker2 outperforms +(in tensor approximation) all the above counterparts when the processed data are outlier corrupted.",8 +"abstract +a basic problem in spectral clustering is the following. if a solution obtained from the +spectral relaxation is close to an integral solution, is it possible to find this integral solution even +though they might be in completely different basis? in this paper, we propose a new spectral +clustering algorithm. it can recover +√ a k-partition such that the subspace corresponding to the +span of its indicator vectors is o( opt) close to the original subspace in spectral norm with +opt being the minimum possible (opt ≤ 1 always). moreover our algorithm does not impose +any restriction on the cluster sizes. previously, no algorithm was known which could find a +k-partition closer than o(k · opt). +we present two applications for our algorithm. first one finds a disjoint union of bounded +degree expanders which approximate a given graph in spectral norm. the second one is for +approximating the sparsest k-partition in a graph where each cluster have expansion at most +φk provided φk ≤ o(λk+1 ) where λk+1 is the (k + 1)st eigenvalue of laplacian matrix. this +significantly improves upon the previous algorithms, which required φk ≤ o(λk+1 /k).",8 +"abstract. 
we introduce quasi-prüfer ring extensions, in order +to relativize quasi-prüfer domains and to take also into account +some contexts in recent papers, where such extensions appear in a +hidden form. an extension is quasi-prüfer if and only if it is an inc +pair. the class of these extensions has nice stability properties. +we also define almost-prüfer extensions that are quasi-prüfer, the +converse being not true. quasi-prüfer extensions are closely linked +to finiteness properties of fibers. applications are given for fmc +extensions, because they are quasi-prüfer.",0 +"abstract. it is notoriously hard to correctly implement a multiparty +protocol which involves asynchronous/concurrent interactions and the +constraints on states of multiple participants. to assist developers in implementing such protocols, we propose a novel specification language to +specify interactions within multiple object-oriented actors and the sideeffects on heap memory of those actors; a behavioral-type-based analysis +is presented for type checking. our specification language formalizes a +protocol as a global type, which describes the procedure of asynchronous +method calls, the usage of futures, and the heap side-effects with a firstorder logic. to characterize runs of instances of types, we give a modeltheoretic semantics for types and translate them into logical constraints +over traces. we prove protocol adherence: if a program is well-typed +w.r.t. a protocol, then every trace of the program adheres to the protocol, i.e., every trace is a model for the formula of its type.",6 +"abstract +discovery of an accurate causal bayesian network structure from observational +data can be useful in many areas of science. often the discoveries are made under +uncertainty, which can be expressed as probabilities. to guide the use of such +discoveries, including directing further investigation, it is important that those +probabilities be well-calibrated. 
in this paper, we introduce a novel framework to +derive calibrated probabilities of causal relationships from observational data. the +framework consists of three components: (1) an approximate method for generating +initial probability estimates of the edge types for each pair of variables, (2) the +availability of a relatively small number of the causal relationships in the network +for which the truth status is known, which we call a calibration training set, and +(3) a calibration method for using the approximate probability estimates and the +calibration training set to generate calibrated probabilities for the many remaining +pairs of variables. we also introduce a new calibration method based on a shallow +neural network. our experiments on simulated data support that the proposed +approach improves the calibration of causal edge predictions. the results also +support that the approach often improves the precision and recall of predictions.",2 +"abstract +screening for ultrahigh dimensional features may encounter complicated issues +such as outlying observations, heteroscedasticity or heavy-tailed distribution, multicollinearity and confounding effects. standard correlation-based marginal screening +methods may be a weak solution to these issues. we contribute a novel robust joint +screener to safeguard against outliers and distribution mis-specification for both the +response variable and the covariates, and to account for external variables at the screening step. specifically, we introduce a copula-based partial correlation (cpc) screener. +we show that the empirical process of the estimated cpc converges weakly to a gaussian process and establish the sure screening property for cpc screener under very +mild technical conditions, where we need not require any moment condition, weaker +than existing alternatives in the literature. moreover, our approach allows for a diverging number of conditional variables from the theoretical point of view. 
extensive +simulation studies and two data applications are included to illustrate our proposal.",10 +"abstract. we use the bass–jiang group for automorphism-induced +hnn-extensions to build a framework for the construction of tractable +groups with pathological outer automorphism groups. we apply +this framework to a strong form of a question of bumagin–wise +on the outer automorphism groups of finitely presented, residually +finite groups.",4 +"abstract +penalized estimation principle is fundamental to high-dimensional problems. in the literature, it has been extensively and successfully applied to various models with only structural +parameters. as a contrast, in this paper, we apply this penalization principle to a linear regression model with a finite-dimensional vector of structural parameters and a high-dimensional +vector of sparse incidental parameters. for the estimators of the structural parameters, we derive their consistency and asymptotic normality, which reveals an oracle property. however, the +penalized estimators for the incidental parameters possess only partial selection consistency but +not consistency. this is an interesting partial consistency phenomenon: the structural parameters are consistently estimated while the incidental ones cannot. for the structural parameters, +also considered is an alternative two-step penalized estimator, which has fewer possible asymptotic distributions and thus is more suitable for statistical inferences. we further extend the +methods and results to the case where the dimension of the structural parameter vector diverges +with but slower than the sample size. a data-driven approach for selecting a penalty regularization parameter is provided. the finite-sample performance of the penalized estimators for the +structural parameters is evaluated by simulations and a real data set is analyzed. 
+keywords: structural parameters, sparse incidental parameters, penalized estimation, partial consistency, +oracle property, two-step estimation, confidence intervals",10 +"abstract +outlier detection is the identification of points in a dataset that do not conform to the norm. outlier +detection is highly sensitive to the choice of the detection algorithm and the feature subspace used by +the algorithm. extracting domain-relevant insights from outliers needs systematic exploration of these +choices since diverse outlier sets could lead to complementary insights. this challenge is especially acute +in an interactive setting, where the choices must be explored in a time-constrained manner. +in this work, we present remix, the first system to address the problem of outlier detection in an +interactive setting. remix uses a novel mixed integer programming (mip) formulation for automatically +selecting and executing a diverse set of outlier detectors within a time limit. this formulation incorporates +multiple aspects such as (i) an upper limit on the total execution time of detectors (ii) diversity in the +space of algorithms and features, and (iii) meta-learning for evaluating the cost and utility of detectors. +remix provides two distinct ways for the analyst to consume its results: (i) a partitioning of the +detectors explored by remix into perspectives through low-rank non-negative matrix factorization; each +perspective can be easily visualized as an intuitive heatmap of experiments versus outliers, and (ii) an +ensembled set of outliers which combines outlier scores from all detectors. we demonstrate the benefits +of remix through extensive empirical validation on real-world data.",2 +"abstract. interaction with services provided by an execution environment forms part of the behaviours exhibited by instruction sequences +under execution. mechanisms related to the kind of interaction in question have been proposed in the setting of thread algebra. 
like thread, +service is an abstract behavioural concept. the concept of a functional +unit is similar to the concept of a service, but more concrete. a state +space is inherent in the concept of a functional unit, whereas it is not +inherent in the concept of a service. in this paper, we establish the existence of a universal computable functional unit for natural numbers and +related results. +keywords: functional unit, instruction sequence. +1998 acm computing classification: f.1.1, f.4.1.",6 +"abstract +representations are internal models of the environment that can provide guidance to a +behaving agent, even in the absence of sensory information. it is not clear how representations are developed and whether or not they are necessary or even essential for +intelligent behavior. we argue here that the ability to represent relevant features of the +environment is the expected consequence of an adaptive process, give a formal definition of representation based on information theory, and quantify it with a measure r. +to measure how r changes over time, we evolve two types of networks—an artificial +neural network and a network of hidden markov gates—to solve a categorization task +using a genetic algorithm. we find that the capacity to represent increases during evolutionary adaptation, and that agents form representations of their environment during +their lifetime. this ability allows the agents to act on sensorial inputs in the context +of their acquired representations and enables complex and context-dependent behavior. +we examine which concepts (features of the environment) our networks are representing, how the representations are logically encoded in the networks, and how they form +as an agent behaves to solve a task. we conclude that r should be able to quantify +∗",9 +"abstract +an autonomous computer system (such as a robot) typically +needs to identify, locate, and track persons appearing in its +sight. 
however, most solutions have their limitations regarding efficiency, practicability, or environmental constraints. in +this paper, we propose an effective and practical system which +combines video and inertial sensors for person identification +(pid). persons who do different activities are easy to identify. +to show the robustness and potential of our system, we propose a walking person identification (wpid) method to identify persons walking at the same time. by comparing features +derived from both video and inertial sensor data, we can associate sensors in smartphones with human objects in videos. +results show that the correctly identified rate of our wpid +method can up to 76% in 2 seconds. +index terms— artificial intelligence, computer vision, +gait analysis, inertial sensor, walking person identification. +1. introduction +human navigates the world through five senses, including +taste, touch, smell, hearing, and sight. we sometimes rely +on one sense while sometimes on multiple senses. for computer systems, the optical sensor is perhaps the most essential +sensor which captures information like human eyes. cameras are widely used for public safety and services in hospitals, shopping malls, streets, etc. on the other hand, booming +use of other sensors is seen in many iot applications due to +the advances in wireless communications and mems. in this +work, we like to raise one fundamental question: how can +we improve the perceptivity of computer systems by integrating multiple sensors? more specifically, we are interested in +fusing video and inertial sensor data to achieve person identification (pid), as is shown in fig. 1(b). +efficient pid is the first step toward surveillance, home +security, person tracking, no checkout supermarkets, and +human-robot conversation. traditional pid technologies are +usually based on capturing biological features like face, voice, +tooth, fingerprint, dna, and iris [1–3]. 
however, these techniques require intimate information of users, cumbersome +registration, training process, and user cooperation. also,",1 +"abstract—the increasing penetration of renewable energy in +recent years has led to more uncertainties in power systems. +in order to maintain system reliability and security, electricity +market operators need to keep certain reserves in the securityconstrained economic dispatch (sced) problems. a new concept, deliverable generation ramping reserve, is proposed in +this paper. the prices of generation ramping reserves and +generation capacity reserves are derived in the affine adjustable +robust optimization framework. with the help of these prices, +the valuable reserves can be identified among the available +reserves. these prices provide crucial information on the values +of reserve resources, which are critical for the long-term flexibility +investment. the market equilibrium based on these prices is +analyzed. simulations on a 3-bus system and the ieee 118-bus +system are performed to illustrate the concept of ramping reserve +price and capacity reserve price. the impacts of the reserve credit +on market participants are discussed. +index terms—ramping reserve, capacity reserve, marginal +price, uncertainties, affinely adjustable robust optimization",3 +"abstract +in this short note we give a formula for the factorization number +f2 (g) of a finite rank 3 abelian p-group g. this extends a result in +our previous work [9].",4 +"abstract +in this paper, we determine a class of deep holes for gabidulin codes +with both rank metric and hamming metric. moreover, we give a necessary +and sufficient condition for deciding whether a word is not a deep hole for +gabidulin codes, by which we study the error distance of two special classes +of words to certain gabidulin codes. 
+keywords: gabidulin codes, rank metric, deep holes, covering radius, error rank +distance",7 +"abstract the present paper treats the problem of finding the asymptotic bounds for the globally optimal +locations of orthogonal stiffeners minimizing the compliance of a rectangular plate in elastostatic bending. +the essence of the paper is the utilization of a method of analysis of orthogonally stiffened rectangular plates +first presented by mazurkiewicz in 1962, and obtained herein in a closed form for several special cases in the +approximation of stiffeners having zero torsional rigidity. asymptotic expansions of the expressions for the +deflection field of a stiffened plate are used to derive limit-case globally optimal stiffening layouts for highly +flexible and highly rigid stiffeners. a central result obtained in this work is an analytical proof of the fact that +an array of flexible enough orthogonal stiffeners of any number, stiffening a simply-supported rectangular +plate subjected to any lateral loading, is best to be put in the form of exactly two orthogonal stiffeners, one in +each direction. +keywords elastic plate bending; orthogonal stiffeners; fredholm's 2nd kind integral equation; +asymptotic analysis; globally optimal positions +1",5 +"abstract +we introduce a pair of tools, rasa nlu and rasa core, which are open source +python libraries for building conversational software. their purpose is to make +machine-learning based dialogue management and language understanding accessible to non-specialist software developers. in terms of design philosophy, we aim +for ease of use, and bootstrapping from minimal (or no) initial training data. both +packages are extensively documented and ship with a comprehensive suite of tests. 
+the code is available at https://github.com/rasahq/",2 +"abstract—the blockchain technology has achieved tremendous success in open (permissionless) decentralized consensus +by employing proof-of-work (pow) or its variants, whereby +unauthorized nodes cannot gain disproportionate impact on +consensus beyond their computational power. however, powbased systems incur a high delay and low throughput, making +them ineffective in dealing with real-time applications. on the +other hand, byzantine fault-tolerant (bft) consensus algorithms with better delay and throughput performance have +been employed in closed (permissioned) settings to avoid sybil +attacks. in this paper, we present sybil-proof wireless network +coordinate based byzantine consensus (senate), which is +based on the conventional bft consensus framework yet works +in open systems of wireless devices where faulty nodes may +launch sybil attacks. as in a senate in the legislature where the +quota of senators per state (district) is a constant irrespective +with the population of the state, “senators” in senate are +selected from participating distributed nodes based on their +wireless network coordinates (wnc) with a fixed number of +nodes per district in the wnc space. elected senators then +participate in the subsequent consensus reaching process and +broadcast the result. thereby, senate is proof against sybil +attacks since pseudonyms of a faulty node are likely to be +adjacent in the wnc space and hence fail to be elected. +index terms—byzantine fault-tolerant consensus, sybil attack, +wireless network, permissionless blockchain, distance geometry",7 +"abstract—time-division duplex (tdd) based massive mimo +systems rely on the reciprocity of the wireless propagation channels when calculating the downlink precoders based on uplink +pilots. 
however, the effective uplink and downlink channels +incorporating the analog radio front-ends of the base station +(bs) and user equipments (ues) exhibit non-reciprocity due to +non-identical behavior of the individual transmit and receive +chains. when downlink precoder is not aware of such channel +non-reciprocity (nrc), system performance can be significantly +degraded due to nrc induced interference terms. in this work, +we consider a general tdd-based massive mimo system where +frequency-response mismatches at both the bs and ues, as well +as the mutual coupling mismatch at the bs large-array system +all coexist and induce channel nrc. based on the nrc-impaired +signal models, we first propose a novel iterative estimation +method for acquiring both the bs and ue side nrc matrices and +then also propose a novel nrc-aware downlink precoder design +which utilizes the obtained estimates. furthermore, an efficient +pilot signaling scheme between the bs and ues is introduced +in order to facilitate executing the proposed estimation method +and the nrc-aware precoding technique in practical systems. +comprehensive numerical results indicate substantially improved +spectral efficiency performance when the proposed nrc estimation and nrc-aware precoding methods are adopted, compared +to the existing state-of-the-art methods. +index terms—beamforming, channel non-reciprocity, channel +state information, frequency-response mismatch, linear precoding, massive mimo, mutual coupling, time division duplexing +(tdd).",7 +"abstract +deep generative models are reported to be useful in broad applications including image generation. repeated inference between data space and latent space +in these models can denoise cluttered images and improve the quality of inferred +results. however, previous studies only qualitatively evaluated image outputs in +data space, and the mechanism behind the inference has not been investigated. 
+the purpose of the current study is to numerically analyze changes in activity patterns of neurons in the latent space of a deep generative model called a “variational
+auto-encoder” (vae). the kinds of inference dynamics the vae demonstrates
+when noise is added to the input data are identified. the vae embeds a dataset
+with clear cluster structures in the latent space and the center of each cluster of
+multiple correlated data points (memories) is referred to as the concept. our study
+demonstrated that transient dynamics of inference first approaches a concept, and
+then moves close to a memory. moreover, the vae revealed that the inference
+dynamics approaches a more abstract concept to the extent that the uncertainty
+of input data increases due to noise. it was demonstrated that by increasing the
+number of the latent variables, the trend of the inference dynamics to approach
+a concept can be enhanced, and the generalization ability of the vae can be improved.",9
"abstract
+a piecewise-deterministic markov process is a stochastic process whose behavior is governed by an
+ordinary differential equation punctuated by random jumps occurring at random times. we focus on
+the nonparametric estimation problem of the jump rate for such a stochastic model observed within a
+long time interval under an ergodicity condition. we introduce an uncountable class (indexed by the
+deterministic flow) of recursive kernel estimates of the jump rate and we establish their strong pointwise
+consistency as well as their asymptotic normality. we propose to choose among this class the estimator
+with the minimal variance, which is unfortunately unknown and thus remains to be estimated. we also
+discuss the choice of the bandwidth parameters by cross-validation methods.
+keywords: cross-validation · jump rate · kernel method · nonparametric estimation · piecewise-deterministic markov process
+mathematics subject classification (2010): 62m05 · 62g20 · 60j25",10
"abstract.
large-scale collaborative analysis of brain imaging data, in psychiatry and neurology, offers a new source of statistical power to discover features +that boost accuracy in disease classification, differential diagnosis, and outcome +prediction. however, due to data privacy regulations or limited accessibility to +large datasets across the world, it is challenging to efficiently integrate distributed information. here we propose a novel classification framework through +multi-site weighted lasso: each site performs an iterative weighted lasso +for feature selection separately. within each iteration, the classification result +and the selected features are collected to update the weighting parameters for +each feature. this new weight is used to guide the lasso process at the next +iteration. only the features that help to improve the classification accuracy are +preserved. in tests on data from five sites (299 patients with major depressive +disorder (mdd) and 258 normal controls), our method boosted classification +accuracy for mdd by 4.9% on average. this result shows the potential of the +proposed new strategy as an effective and practical collaborative platform for +machine learning on large scale distributed imaging and biobank data.",5 +"abstract—autonomous aerial robots provide new possibilities +to study the habitats and behaviors of endangered species through +the efficient gathering of location information at temporal and +spatial granularities not possible with traditional manual survey +methods. we present a novel autonomous aerial vehicle system to +track and localize multiple radio-tagged animals. the simplicity +of measuring the received signal strength indicator (rssi) values +of very high frequency (vhf) radio-collars commonly used +in the field is exploited to realize a low cost and lightweight +tracking platform suitable for integration with unmanned aerial +vehicles (uavs). 
due to uncertainty and the nonlinearity of the +system based on rssi measurements, our tracking and planning +approaches integrate a particle filter for tracking and localizing; +a partially observable markov decision process (pomdp) for +dynamic path planning. this approach allows autonomous navigation of a uav in a direction of maximum information gain +to locate multiple mobile animals and reduce exploration time; +and, consequently, conserve on-board battery power. we also +employ the concept of a search termination criteria to maximize +the number of located animals within power constraints of the +aerial system. we validated our real-time and online approach +through both extensive simulations and field experiments with +two mobile vhf radio-tags.",3 +"abstract +model distillation compresses a trained machine learning model, such as +a neural network, into a smaller alternative such that it could be easily +deployed in a resource limited setting. unfortunately, this requires engineering two architectures: a student architecture smaller than the first +teacher architecture but trained to emulate it. in this paper, we present a +distillation strategy that produces a student architecture that is a simple +transformation of the teacher architecture. recent model distillation methods allow us to preserve most of the performance of the trained model +after replacing convolutional blocks with a cheap alternative. in addition, +distillation by attention transfer provides student network performance +that is better than training that student architecture directly on data.",1 +"abstract +protocells are supposed to have played a key role in the self-organizing +processes leading to the emergence of life. 
existing models either (i) describe protocell architecture and dynamics, given the existence of sets of +collectively self-replicating molecules for granted, or (ii) describe the emergence of the aforementioned sets from an ensemble of random molecules +in a simple experimental setting (e.g. a closed system or a steady-state +flow reactor) that does not properly describe a protocell. in this paper we +present a model that goes beyond these limitations by describing the dynamics of sets of replicating molecules within a lipid vesicle. we adopt the +simplest possible protocell architecture, by considering a semi-permeable +membrane that selects the molecular types that are allowed to enter or +exit the protocell and by assuming that the reactions take place in the +aqueous phase in the internal compartment. as a first approximation, +we ignore the protocell growth and division dynamics. the behavior of +catalytic reaction networks is then simulated by means of a stochastic +model that accounts for the creation and the extinction of species and +reactions. while this is not yet an exhaustive protocell model, it already +provides clues regarding some processes that are relevant for understanding the conditions that can enable a population of protocells to undergo +evolution and selection.",5 +"abstract +the problem of content delivery in caching networks is investigated for scenarios where multiple +users request identical files. redundant user demands are likely when the file popularity distribution +is highly non-uniform or the user demands are positively correlated. an adaptive method is proposed +for the delivery of redundant demands in caching networks. based on the redundancy pattern in the +current demand vector, the proposed method decides between the transmission of uncoded messages +or the coded messages of [1] for delivery. moreover, a lower bound on the delivery rate of redundant +requests is derived based on a cutset bound argument. 
the performance of the adaptive method is +investigated through numerical examples of the delivery rate of several specific demand vectors as well +as the average delivery rate of a caching network with correlated requests. the adaptive method is +shown to considerably reduce the gap between the non-adaptive delivery rate and the lower bound. in +some specific cases, using the adaptive method, this gap shrinks by almost 50% for the average rate.",7 +"abstract—in this note we deal with a new observer for +nonlinear systems of dimension n in canonical observability form. +we follow the standard high-gain paradigm, but instead of having +an observer of dimension n with a gain that grows up to power +n, we design an observer of dimension 2n − 2 with a gain that +grows up only to power 2.",3 +"abstract. the control problem of a linear discrete-time dynamical system +over a multi-hop network is explored. the network is assumed to be subject to +packet drops by malicious and nonmalicious nodes as well as random and malicious data corruption issues. we utilize asymptotic tail-probability bounds of +transmission failure ratios to characterize the links and paths of a network as +well as the network itself. this probabilistic characterization allows us to take +into account multiple failures that depend on each other, and coordinated malicious attacks on the network. we obtain a sufficient condition for the stability +of the networked control system by utilizing our probabilistic approach. we +then demonstrate the efficacy of our results in different scenarios concerning +transmission failures on a multi-hop network.",3 +"abstract +in this article, we advance divide-and-conquer strategies for solving the community detection problem in networks. we propose two algorithms which perform clustering on a number of small subgraphs +and finally patches the results into a single clustering. 
+the main advantage of these algorithms is that
+they bring down significantly the computational cost of traditional algorithms, including spectral clustering, semi-definite programs, modularity based methods, likelihood based methods etc., without losing
+on accuracy and even improving accuracy at times. these algorithms are also, by nature, parallelizable.
+thus, exploiting the facts that most traditional algorithms are accurate and the corresponding optimization problems are much simpler in small problems, our divide-and-conquer methods provide an omnibus
+recipe for scaling traditional algorithms up to large networks. we prove consistency of these algorithms
+under various subgraph selection procedures and perform extensive simulations and real-data analysis to
+understand the advantages of the divide-and-conquer approach in various settings.",10
"abstract
+the main aim of this paper is to prove r-triviality for simple, simply connected
+algebraic groups with tits index e^{78}_{8,2} or e^{78}_{7,1}, defined over a field k of arbitrary
+characteristic. let g be such a group. we prove that there exists a quadratic
+extension k of k such that g is r-trivial over k, i.e., for any extension f of
+k, g(f)/r = {1}, where g(f)/r denotes the group of r-equivalence classes in
+g(f), in the sense of manin (see [23]). as a consequence, it follows that the
+variety g is retract k-rational and that the kneser-tits conjecture holds for these
+groups over k. moreover, g(l) is projectively simple as an abstract group for
+any field extension l of k. in their monograph ([51]) j. tits and richard weiss
+conjectured that for an albert division algebra a over a field k, its structure group
+str(a) is generated by scalar homotheties and its u-operators. this is known to
+be equivalent to the kneser-tits conjecture for groups with tits index e^{78}_{8,2}. we
+settle this conjecture for albert division algebras which are first constructions, in
+the affirmative.
these results are obtained as corollaries to the main result, which
+shows that if a is an albert division algebra which is a first construction and γ
+its structure group, i.e., the algebraic group of the norm similarities of a, then
+γ(f)/r = {1} for any field extension f of k, i.e., γ is r-trivial.
+keywords: exceptional groups, algebraic groups, albert algebras, structure group, kneser-tits
+conjecture",4
"abstract
+we consider a simple model of unreliable or crowdsourced data where there is an underlying set
+of n binary variables, each “evaluator” contributes a (possibly unreliable or adversarial) estimate of the
+values of some subset of r of the variables, and the learner is given the true value of a constant number
+of variables. we show that, provided an α-fraction of the evaluators are “good” (either correct, or with
+independent noise rate p < 1/2), then the true values of a (1 − ǫ) fraction of the n underlying variables
+can be deduced as long as α > 1/(2 − 2p)^r. for example, if each “good” worker evaluates a random
+set of 10 items and there is no noise in their responses, then accurate recovery is possible provided the
+fraction of good evaluators is larger than 1/1024. this result is optimal in that if α ≤ 1/(2 − 2p)^r, the
+large dataset can contain no information. this setting can be viewed as an instance of the semi-verified
+learning model introduced in [3], which explores the tradeoff between the number of items evaluated by
+each worker and the fraction of “good” evaluators. our results require the number of evaluators to be
+extremely large, > n^r, although our algorithm runs in linear time, o_{r,ǫ}(n), given query access to the
+large dataset of evaluations. this setting and results can also be viewed as examining a general class of
+semi-adversarial csps with a planted assignment.
+this extreme parameter regime, where the fraction of reliable data is small (inverse exponential in
+the amount of data provided by each source), is relevant to a number of practical settings. for example,
+settings where one has a large dataset of customer preferences, with each customer specifying preferences for a small (constant) number of items, and the goal is to ascertain the preferences of a specific
+demographic of interest. our results show that this large dataset (which lacks demographic information)
+can be leveraged together with the preferences of the demographic of interest for a constant number
+of randomly selected items, to recover an accurate estimate of the entire set of preferences, even if the
+fraction of the original dataset contributed by the demographic of interest is inverse exponential in the
+number of preferences supplied by each customer. in this sense, our results can be viewed as a “data
+prism” allowing one to extract the behavior of specific cohorts from a large, mixed, dataset.",7
"abstract—facial beauty prediction (fbp) is a significant visual
+recognition problem to make assessment of facial attractiveness
+that is consistent with human perception. to tackle this problem, various data-driven models, especially state-of-the-art deep
+learning techniques, were introduced, and benchmark datasets
+have become one of the essential elements to achieve fbp. previous
+works have formulated the recognition of facial beauty as a
+specific supervised learning problem of classification, regression
+or ranking, which indicates that fbp is intrinsically a computation problem with multiple paradigms. however, most of
+fbp benchmark datasets were built under specific computation
+constraints, which limits the performance and flexibility of the
+computational model trained on the dataset.
in this paper, we
+argue that fbp is a multi-paradigm computation problem, and
+propose a new diverse benchmark dataset, called scut-fbp5500,
+to achieve multi-paradigm facial beauty prediction. the scut-fbp5500 dataset has a total of 5500 frontal faces with diverse
+properties (male/female, asian/caucasian, ages) and diverse labels (face landmarks, beauty scores within [1, 5], beauty score
+distribution), which allows different computational models with
+different fbp paradigms, such as appearance-based/shape-based
+facial beauty classification/regression model for male/female of
+asian/caucasian. we evaluated the scut-fbp5500 dataset for
+fbp using different combinations of feature and predictor,
+and various deep learning methods. the results indicate the
+improvement of fbp and the potential applications based on the
+scut-fbp5500.",1
"abstract
+in general the different links of a broadcast channel may experience different fading dynamics
+and, potentially, unequal or hybrid channel state information (csi) conditions. the faster the fading
+and the shorter the fading block length, the more often the link needs to be trained and estimated at
+the receiver, and the more likely that csi is stale or unavailable at the transmitter. disparity of link
+fading dynamics in the presence of csi limitations can be modeled by a multi-user broadcast channel
+with both non-identical link fading block lengths as well as dissimilar link csir/csit conditions.
+this paper investigates a miso broadcast channel where some receivers experience longer coherence
+intervals (static receivers) and have csir, while some other receivers experience shorter coherence
+intervals (dynamic receivers) and do not enjoy free csir. we consider a variety of csit conditions
+for the above mentioned model, including no csit, delayed csit, or hybrid csit.
to investigate the +degrees of freedom region, we employ interference alignment and beamforming along with a product +superposition that allows simultaneous but non-contaminating transmission of pilots and data to different +receivers. outer bounds employ the extremal entropy inequality as well as a bounding of the performance +of a discrete memoryless multiuser multilevel broadcast channel. for several cases, inner and outer +bounds are established that either partially meet, or the gap diminishes with increasing coherence times.",7 +"abstract +in the last fifteen years, the high performance computing (hpc) community has claimed for parallel programming environments that reconciles generality, higher level of abstraction, portability, and efficiency for +distributed-memory parallel computing platforms. the hash component +model appears as an alternative for addressing hpc community claims +for fitting these requirements. this paper presents foundations that will +enable a parallel programming environment based on the hash model to +address the problems of “debugging”, performance evaluation and verification of formal properties of parallel program by means of a powerful, +simple, and widely adopted formalism: petri nets.",6 +"abstract +distributed actor languages are an effective means of constructing scalable reliable systems, and the erlang +programming language has a well-established and influential model. while the erlang model conceptually +provides reliable scalability, it has some inherent scalability limits and these force developers to depart from +the model at scale. this article establishes the scalability limits of erlang systems, and reports the work of +the eu release project to improve the scalability and understandability of the erlang reliable distributed +actor model. +we systematically study the scalability limits of erlang, and then address the issues at the virtual machine, +language and tool levels. 
more specifically: (1) we have evolved the erlang virtual machine so that it can +work effectively in large scale single-host multicore and numa architectures. we have made important +changes and architectural improvements to the widely used erlang/otp release. (2) we have designed and +implemented scalable distributed (sd) erlang libraries to address language-level scalability issues, and +provided and validated a set of semantics for the new language constructs. (3) to make large erlang systems +easier to deploy, monitor, and debug we have developed and made open source releases of five complementary +tools, some specific to sd erlang. +throughout the article we use two case studies to investigate the capabilities of our new technologies and +tools: a distributed hash table based orbit calculation and ant colony optimisation (aco). chaos monkey +experiments show that two versions of aco survive random process failure and hence that sd erlang +preserves the erlang reliability model. while we report measurements on a range of numa and cluster +architectures, the key scalability experiments are conducted on the athos cluster with 256 hosts (6144 cores). +even for programs with no global recovery data to maintain, sd erlang partitions the network to reduce +network traffic and hence improves performance of the orbit and aco benchmarks above 80 hosts. aco +measurements show that maintaining global recovery data dramatically limits scalability; however scalability +is recovered by partitioning the recovery data. we exceed the established scalability limits of distributed +erlang, and do not reach the limits of sd erlang for these benchmarks at this scale (256 hosts, 6144 cores).",6 +"abstract. pseudo-code is a great way of communicating ideas quickly and clearly while +giving readers no chance to understand the subtle implementation details (particularly the custom +toolchains and manual interventions) that actually make it work. +3. short and sweet. 
any limitations of your methods or proofs will be obvious to the careful reader,
+so there is no need to waste space on making them explicit. however much work it takes colleagues
+to fill in the gaps, you will still get the credit if you just say you have amazing experiments or
+proofs (with a hat-tip to pierre de fermat: “cuius rei demonstrationem mirabilem sane detexi
+hanc marginis exiguitas non caperet.”).
+4. the deficit model. you’re the expert in the domain, only you can define what algorithms and
+data to run experiments with. in the unhappy circumstance that your methods do not do well on",5
"abstract. let g be the free two step nilpotent lie group on three generators and let
+l be a sublaplacian on it. we compute the spectral resolution of l and prove that the
+operators arising from this decomposition enjoy a tomas-stein type estimate.",4
abstract,6
"abstract. let k be a field of characteristic 0 and consider exterior algebras of finite dimensional k-vector spaces. in this short paper we exhibit
+principal quadric ideals in a family whose castelnuovo-mumford regularity is
+unbounded. this negatively answers the analogue of stillman’s question for
+exterior algebras posed by i. peeva. we show that these examples are dual to
+modules over polynomial rings that yield counterexamples to a conjecture of
+j. herzog on the betti numbers in the linear strand of syzygy modules.",0
"abstract
+recruitment market analysis provides valuable understanding of industry-specific economic growth and plays an important role for both employers and job seekers. with the
+rapid development of online recruitment services, massive
+recruitment data have been accumulated and enable a new
+paradigm for recruitment market analysis.
however, traditional methods for recruitment market analysis largely rely +on the knowledge of domain experts and classic statistical +models, which are usually too general to model large-scale +dynamic recruitment data, and have difficulties to capture +the fine-grained market trends. to this end, in this paper, +we propose a new research paradigm for recruitment market +analysis by leveraging unsupervised learning techniques for +automatically discovering recruitment market trends based +on large-scale recruitment data. specifically, we develop +a novel sequential latent variable model, named mtlvm, +which is designed for capturing the sequential dependencies +of corporate recruitment states and is able to automatically +learn the latent recruitment topics within a bayesian generative framework. in particular, to capture the variability of recruitment topics over time, we design hierarchical +dirichlet processes for mtlvm. these processes allow to +dynamically generate the evolving recruitment topics. finally, we implement a prototype system to empirically evaluate our approach based on real-world recruitment data in +china. indeed, by visualizing the results from mtlvm, we +can successfully reveal many interesting findings, such as the +popularity of lbs related jobs reached the peak in the 2nd +half of 2014, and decreased in 2015.",2 +"abstract +this work studies the strong duality of non-convex matrix factorization problems: we show that +under certain dual conditions, these problems and its dual have the same optimum. this has been well +understood for convex optimization, but little was known for non-convex problems. we propose a novel +analytical framework and show that under certain dual conditions, the optimal solution of the matrix +factorization program is the same as its bi-dual and thus the global optimality of the non-convex program +can be achieved by solving its bi-dual which is convex. 
these dual conditions are satisfied by a wide +class of matrix factorization problems, although matrix factorization problems are hard to solve in full +generality. this analytical framework may be of independent interest to non-convex optimization more +broadly. +we apply our framework to two prototypical matrix factorization problems: matrix completion and +robust principal component analysis (pca). these are examples of efficiently recovering a hidden matrix +given limited reliable observations of it. our framework shows that exact recoverability and strong duality +hold with nearly-optimal sample complexity guarantees for matrix completion and robust pca.",8 +"abstract +the heat kernel is a type of graph diffusion that, like the +much-used personalized pagerank diffusion, is useful in identifying a community nearby a starting seed node. we present +the first deterministic, local algorithm to compute this diffusion and use that algorithm to study the communities that it +produces. our algorithm is formally a relaxation method for +solving a linear system to estimate the matrix exponential +in a degree-weighted norm. we prove that this algorithm +stays localized in a large graph and has a worst-case constant runtime that depends only on the parameters of the +diffusion, not the size of the graph. on large graphs, our experiments indicate that the communities produced by this +method have better conductance than those produced by +pagerank, although they take slightly longer to compute. +on a real-world community identification task, the heat kernel communities perform better than those from the pagerank diffusion.",8 +"abstract +event sequence, asynchronously generated with random +timestamp, is ubiquitous among applications. the precise and +arbitrary timestamp can carry important clues about the underlying dynamics, and has lent the event data fundamentally +different from the time-series whereby series is indexed with +fixed and equal time interval. 
one expressive mathematical +tool for modeling event is point process. the intensity functions of many point processes involve two components: the +background and the effect by the history. due to its inherent spontaneousness, the background can be treated as a time +series while the other need to handle the history events. in +this paper, we model the background by a recurrent neural +network (rnn) with its units aligned with time series indexes while the history effect is modeled by another rnn +whose units are aligned with asynchronous events to capture the long-range dynamics. the whole model with event +type and timestamp prediction output layers can be trained +end-to-end. our approach takes an rnn perspective to point +process, and models its background and history effect. for +utility, our method allows a black-box treatment for modeling the intensity which is often a pre-defined parametric form +in point processes. meanwhile end-to-end training opens the +venue for reusing existing rich techniques in deep network for +point process modeling. we apply our model to the predictive +maintenance problem using a log dataset by more than 1000 +atms from a global bank headquartered in north america.",2 +"abstract—in this paper, we present the role playing learning +(rpl) scheme for a mobile robot to navigate socially with its +human companion in populated environments. neural networks +(nn) are constructed to parameterize a stochastic policy that +directly maps sensory data collected by the robot to its velocity +outputs, while respecting a set of social norms. an efficient simulative learning environment is built with maps and pedestrians +trajectories collected from a number of real-world crowd data +sets. in each learning iteration, a robot equipped with the nn +policy is created virtually in the learning environment to play +itself as a companied pedestrian and navigate towards a goal in +a socially concomitant manner. 
thus, we call this process role +playing learning, which is formulated under a reinforcement +learning (rl) framework. the nn policy is optimized end-toend using trust region policy optimization (trpo), with consideration of the imperfectness of robot’s sensor measurements. +simulative and experimental results are provided to demonstrate +the efficacy and superiority of our method.",2 +"abstract: +this paper continues the research started in lepski and willer (2016). in the framework of +the convolution structure density model on rd , we address the problem of adaptive minimax +estimation with lp –loss over the scale of anisotropic nikol’skii classes. we fully characterize +the behavior of the minimax risk for different relationships between regularity parameters and +norm indexes in the definitions of the functional class and of the risk. in particular, we show +that the boundedness of the function to be estimated leads to an essential improvement of +the asymptotic of the minimax risk. we prove that the selection rule proposed in part i leads +to the construction of an optimally or nearly optimally (up to logarithmic factor) adaptive +estimator. +ams 2000 subject classifications: 62g05, 62g20. +keywords and phrases: deconvolution model, density estimation, oracle inequality, adaptive estimation, kernel estimators, lp –risk, anisotropic nikol’skii class.",10 +"abstract: cumulative link models have been widely used for ordered categorical +responses. uniform allocation of experimental units is commonly used in practice, +but often suffers from a lack of efficiency. we consider d-optimal designs with +ordered categorical responses and cumulative link models. for a predetermined set +of design points, we derive the necessary and sufficient conditions for an allocation +to be locally d-optimal and develop efficient algorithms for obtaining approximate +and exact designs. 
we prove that the number of support points in a minimally +supported design only depends on the number of predictors, which can be much +less than the number of parameters in the model. we show that a d-optimal +minimally supported allocation in this case is usually not uniform on its support +points. in addition, we provide ew d-optimal designs as a highly efficient surrogate +to bayesian d-optimal designs. both of them can be much more robust than +uniform designs. +key words and phrases: approximate design, exact design, multinomial response, +cumulative link model, minimally supported design, ordinal data.",10 +abstract,2 +"abstract. we define and study generalized nil-coxeter algebras associated +to coxeter groups. motivated by a question of coxeter (1957), we construct the first examples of such finite-dimensional algebras that are not the +‘usual’ nil-coxeter algebras: a novel 2-parameter type a family that we call +n ca (n, d). we explore several combinatorial properties of n ca (n, d), including its coxeter word basis, length function, and hilbert–poincaré series, +and show that the corresponding generalized coxeter group is not a flat deformation of n ca (n, d). these algebras yield symmetric semigroup module +categories that are necessarily not monoidal; we write down their tannaka– +krein duality. +further motivated by the broué–malle–rouquier (bmr) freeness conjecture [j. reine angew. math. 1998], we define generalized nil-coxeter algebras n cw over all discrete real or complex reflection groups w , finite or +infinite. we provide a complete classification of all such algebras that are +finite-dimensional. remarkably, these turn out to be either the usual nilcoxeter algebras, or the algebras n ca (n, d). this proves as a special case – +and strengthens – the lack of equidimensional nil-coxeter analogues for finite +complex reflection groups. 
in particular, generic hecke algebras are not flat +deformations of n cw for w complex.",4 +"abstract +reinforcement learning (rl) is a promising +approach to solve dialogue policy optimisation. traditional rl algorithms, however, fail +to scale to large domains due to the curse +of dimensionality. we propose a novel dialogue management architecture, based on feudal rl, which decomposes the decision into +two steps; a first step where a master policy +selects a subset of primitive actions, and a second step where a primitive action is chosen +from the selected subset. the structural information included in the domain ontology is +used to abstract the dialogue state space, taking the decisions at each step using different +parts of the abstracted state. this, combined +with an information sharing mechanism between slots, increases the scalability to large +domains. we show that an implementation +of this approach, based on deep-q networks, +significantly outperforms previous state of the +art in several dialogue domains and environments, without the need of any additional reward signal.",2 +"abstract. learning the model parameters of a multi-object dynamical system from partial and +perturbed observations is a challenging task. despite recent numerical advancements in learning these +parameters, theoretical guarantees are extremely scarce. in this article, we study the identifiability +of these parameters and the consistency of the corresponding maximum likelihood estimate (mle) +under assumptions on the different components of the underlying multi-object system. in order to +understand the impact of the various sources of observation noise on the ability to learn the model +parameters, we study the asymptotic variance of the mle through the associated fisher information +matrix. 
for example, we show that specific aspects of the multi-target tracking (mtt) problem such
as detection failures and unknown data association lead to a loss of information which is quantified
in special cases of interest.
key words. identifiability, consistency, fisher information
ams subject classifications. 62f12, 62b10",10
"abstract—massive multiple-input multiple-output (mimo) systems, which utilize a large number of antennas at the base
station, are expected to enhance network throughput by enabling
improved multiuser mimo techniques. to deploy many antennas
in reasonable form factors, base stations are expected to employ
antenna arrays in both horizontal and vertical dimensions, which
is known as full-dimension (fd) mimo. the most popular
two-dimensional array is the uniform planar array (upa),
where antennas are placed in a grid pattern. to exploit the
full benefit of massive mimo in frequency division duplexing
(fdd), the downlink channel state information (csi) should be
estimated, quantized, and fed back from the receiver to the
transmitter. however, it is difficult to accurately quantize the
channel in a computationally efficient manner due to the high
dimensionality of the massive mimo channel. in this paper, we
develop both narrowband and wideband csi quantizers for fd-mimo,
taking the properties of realistic channels and the upa
into consideration. to improve quantization quality, we focus
not only on quantizing dominant radio paths in the channel,
but also on combining the quantized beams. we also develop a
hierarchical beam search approach, which scans both vertical
and horizontal domains jointly with moderate computational
complexity.
numerical simulations verify that the performance
of the proposed quantizers is better than that of previous csi
quantization techniques.",7
"abstract
we initiate a thorough study of distributed property testing – producing algorithms for the
approximation problems of property testing in the congest model. in particular, for the
so-called dense graph testing model we emulate sequential tests for nearly all graph properties
having 1-sided tests, while in the general and sparse models we obtain faster tests for triangle-freeness,
cycle-freeness and bipartiteness, respectively. in addition, we show a logarithmic lower
bound for testing bipartiteness and cycle-freeness, which holds even in the stronger local
model.
in most cases, aided by parallelism, the distributed algorithms have a much shorter running
time as compared to their counterparts from the sequential querying model of traditional property testing. the simplest property testing algorithms allow a relatively smooth transition
to the distributed model. for the more complex tasks we develop new machinery that may be
of independent interest.",8
"abstract. standard higher-order contract monitoring breaks tail recursion and leads to space leaks that can change a program's asymptotic complexity; space efficiency restores tail recursion and bounds the
amount of space used by contracts. space-efficient contract monitoring
for contracts enforcing simple type disciplines (a/k/a gradual typing) is
well studied. prior work establishes a space-efficient semantics for manifest contracts without dependency [11]; we adapt that work to a latent
calculus with dependency.
we guarantee space efficiency when no dependency is used; we cannot generally guarantee space efficiency when +dependency is used, but instead offer a framework for making such programs space efficient on a case-by-case basis.",6 +"abstract +recent work on neural network pruning indicates that, at training time, neural +networks need to be significantly larger in size than is necessary to represent the +eventual functions that they learn. this paper articulates a new hypothesis to explain +this phenomenon. this conjecture, which we term the lottery ticket hypothesis, +proposes that successful training depends on lucky random initialization of a +smaller subcomponent of the network. larger networks have more of these “lottery +tickets,” meaning they are more likely to luck out with a subcomponent initialized +in a configuration amenable to successful optimization. +this paper conducts a series of experiments with xor and mnist that support +the lottery ticket hypothesis. in particular, we identify these fortuitously-initialized +subcomponents by pruning low-magnitude weights from trained networks. we then +demonstrate that these subcomponents can be successfully retrained in isolation +so long as the subnetworks are given the same initializations as they had at the +beginning of the training process. initialized as such, these small networks reliably +converge successfully, often faster than the original network at the same level +of accuracy. however, when these subcomponents are randomly reinitialized or +rearranged, they perform worse than the original network. in other words, large +networks that train successfully contain small subnetworks with initializations +conducive to optimization. 
the lottery ticket hypothesis and its connection to pruning are a step toward
developing architectures, initializations, and training strategies that make it possible
to solve the same problems with much smaller networks.",2
"abstract
this paper continues the functional approach to the p-versus-np problem, begun in [2]. here
we focus on the monoid rmp2 of right-ideal morphisms of the free monoid, that have polynomial
input balance and polynomial time-complexity. we construct a machine model for the functions
in rmp2, and evaluation functions. we prove that rmp2 is not finitely generated, and use this to
show separation results for time-complexity.",4
abstract,9
"abstract
we address the fundamental network design problem of constructing approximate minimum
spanners. our contributions are for the distributed setting, providing both algorithmic and
hardness results.
our main hardness result shows that an α-approximation for the minimum directed k-spanner
problem for k ≥ 5 requires ω(n/(√α log n)) rounds using deterministic algorithms or
ω(√n/(√α log n)) rounds using randomized ones, in the congest model of distributed computing.
combined with the constant-round o(n^ε)-approximation algorithm in the local model of
[barenboim, elkin and gavoille, 2016], as well as a polylog-round (1 + ε)-approximation algorithm
in the local model that we show here, our lower bounds for the congest model imply a strict
separation between the local and congest models. notably, to the best of our knowledge,
this is the first separation between these models for a local approximation problem.
similarly, a separation between the directed and undirected cases is implied. we also prove
that the minimum weighted k-spanner problem for k ≥ 4 requires a near-linear number of rounds
in the congest model, for directed or undirected graphs. in addition, we show lower bounds
for the minimum weighted 2-spanner problem in the congest and local models.
on the algorithmic side, apart from the aforementioned (1 + ε)-approximation algorithm
for minimum k-spanners, our main contribution is a new distributed construction of minimum
2-spanners that uses only polynomial local computations. our algorithm has a guaranteed
approximation ratio of o(log(m/n)) for a graph with n vertices and m edges, which matches
the best known ratio for polynomial time sequential algorithms [kortsarz and peleg, 1994],
and is tight if we restrict ourselves to polynomial local computations. an algorithm with this
approximation factor was not previously known for the distributed setting. the number of rounds
required for our algorithm is o(log n log ∆) w.h.p., where ∆ is the maximum degree in the graph.
our approach allows us to extend our algorithm to work also for the directed, weighted, and
client-server variants of the problem. it also provides a congest algorithm for the minimum
dominating set problem, with a guaranteed o(log ∆) approximation ratio.",8
"abstract. high energy physics (hep) distributed computing infrastructures require
automatic tools to monitor, analyze and react to potential security incidents. these tools
should collect and inspect data such as resource consumption, logs and sequences of system calls
for detecting anomalies that indicate the presence of a malicious agent. they should also be able
to perform automated reactions to attacks without administrator intervention. we describe a
novel framework that accomplishes these requirements, with a proof of concept implementation
for the alice experiment at cern. we show how we achieve a fully virtualized environment
that improves the security by isolating services and jobs without a significant performance
impact. we also describe a collected dataset for machine learning based intrusion prevention
and detection systems on grid computing.
this dataset is composed of resource consumption
measurements (such as cpu, ram and network traffic), logfiles from operating system services,
and system call data collected from production jobs running in an alice grid test site and
a big set of malware. this malware was collected from security research sites. based on this
dataset, we will proceed to develop machine learning algorithms able to detect malicious jobs.",2
"abstract: people can think in auditory, visual and tactile forms of language, and so, in principle, can machines. but is it possible for them to think in radio language? according to a first principle
presented for general intelligence, i.e. the principle of language's relativity, the answer may
give an exceptional solution for robot astronauts to talk with each other in space exploration.
keywords: the principle of language's relativity; first principle; radio language; space exploration.",2
"abstract—the success of recent deep convolutional neural networks (cnns) depends on learning hidden representations that can
summarize the important factors of variation behind the data. however, cnns are often criticized as being black boxes that lack
interpretability, since they have millions of unexplained model parameters. in this work, we describe network dissection, a method that
interprets networks by providing labels for the units of their deep visual representations. the proposed method quantifies the
interpretability of cnn representations by evaluating the alignment between individual hidden units and a set of visual semantic concepts.
by identifying the best alignments, units are given human-interpretable labels across a range of objects, parts, scenes, textures, materials,
and colors. the method reveals that deep representations are more transparent and interpretable than expected: we find that
representations are significantly more interpretable than they would be under a random equivalently powerful basis.
we apply the method +to interpret and compare the latent representations of various network architectures trained to solve different supervised and +self-supervised training tasks. we then examine factors affecting the network interpretability such as the number of the training iterations, +regularizations, different initializations, and the network depth and width. finally we show that the interpreted units can be used to provide +explicit explanations of a prediction given by a cnn for an image. our results highlight that interpretability is an important property of +deep neural networks that provides new insights into their hierarchical structure. +index terms—convolutional neural networks, network interpretability, visual recognition, interpretable machine learning.",1 +"abstract +the problem of obtaining dense reconstruction of an object in a natural sequence of images has been long studied in computer vision. classically this problem has been +solved through the application of bundle adjustment (ba). +more recently, excellent results have been attained through +the application of photometric bundle adjustment (pba) +methods – which directly minimize the photometric error +across frames. a fundamental drawback to ba & pba, however, is: (i) their reliance on having to view all points on +the object, and (ii) for the object surface to be well textured. to circumvent these limitations we propose semantic pba which incorporates a 3d object prior, obtained +through deep learning, within the photometric bundle adjustment problem. we demonstrate state of the art performance in comparison to leading methods for object reconstruction across numerous natural sequences.",1 +"abstract +slow running or straggler tasks can significantly reduce computation speed in +distributed computation. 
recently, coding-theory-inspired approaches have been
applied to mitigate the effect of straggling, through embedding redundancy in
certain linear computational steps of the optimization algorithm, thus completing
the computation without waiting for the stragglers. in this paper, we propose an
alternate approach where we embed the redundancy directly in the data itself, and
allow the computation to proceed completely oblivious to encoding. we propose
several encoding schemes, and demonstrate that popular batch algorithms, such as
gradient descent and l-bfgs, applied in a coding-oblivious manner, deterministically achieve sample path linear convergence to an approximate solution of the
original problem, using an arbitrarily varying subset of the nodes at each iteration.
moreover, this approximation can be controlled by the amount of redundancy
and the number of nodes used in each iteration. we provide experimental results
demonstrating the advantage of the approach over uncoded and data replication
strategies.",7
"abstract
the supplier selection problem, which consists of choosing the best supplier from a group of pre-specified candidates, is a multi-criteria decision making (mcdm) problem that is
significant in terms of both qualitative and quantitative attributes. it is a
fundamental issue to achieve a trade-off between such quantifiable and unquantifiable
attributes with an aim to accomplish the best solution to the abovementioned problem. this
article portrays a metaheuristic-based optimization model to solve this np-complete problem.
initially, the analytic hierarchy process (ahp) is implemented to generate an initial feasible
solution of the problem. thereafter, a simulated annealing (sa) algorithm is exploited to
improve the quality of the obtained solution. the taguchi robust design method is employed
to solve the critical issues concerning the parameter selection of the sa technique.
in
order to verify the proposed methodology, numerical results based on
tangible industry data are presented.",9
"abstract
in this paper, we derive a temporal arbitrage policy for storage via reinforcement learning. real-time price
arbitrage is an important source of revenue for storage units, but designing good strategies has proven to be difficult
because of the highly uncertain nature of the prices. instead of current model predictive or dynamic programming
approaches, we use reinforcement learning to design an optimal arbitrage policy. this policy is learned through
repeated charge and discharge actions performed by the storage unit through updating a value matrix. we design a
reward function that not only reflects the instant profit of charge/discharge decisions but also incorporates the
history information. simulation results demonstrate that our designed reward function leads to significant performance
improvement compared with existing algorithms.",3
"abstract
we consider a class of interdependent security games on networks where each node chooses
a personal level of security investment. the attack probability experienced by a node is a
function of her own investment and the investment by her neighbors in the network. most of
the existing work in these settings considers players who are risk-neutral. in contrast, studies in
behavioral decision theory have shown that individuals often deviate from risk-neutral behavior
while making decisions under uncertainty. in particular, the true probabilities associated with
uncertain outcomes are often transformed into perceived probabilities in a highly nonlinear
fashion by the users, which then influence their decisions. in this paper, we investigate the effects
of such behavioral probability weightings by the nodes on their optimal investment strategies
and the resulting security risk profiles that arise at the nash equilibria of interdependent network
security games.
we characterize graph topologies that achieve the largest and smallest worst +case average attack probabilities at nash equilibria in total effort games, and equilibrium +investments in weakest link and best shot games.",3 +"abstract +the co-deployment of radio frequency (rf) and visible light communications (vlc) technologies +has been investigated in indoor environments to enhance network performances and to address specific +quality-of-service (qos) constraints. in this paper, we explore the benefits of employing both technologies when the qos requirements are imposed as limits on the buffer overflow and delay violation +probabilities, which are important metrics in designing low latency wireless networks. particularly, we +consider a multi-mechanism scenario that utilizes rf and vlc links for data transmission in an indoor +environment, and then propose a link selection process through which the transmitter sends data over +the link that sustains the desired qos guarantees the most. considering an on-off data source, we +employ the maximum average data arrival rate at the transmitter buffer and the non-asymptotic bounds +on data buffering delay as the main performance measures. we formulate the performance measures +under the assumption that both links are subject to average and peak power constraints. furthermore, +we investigate the performance levels when either one of the two links is used for data transmission, or +when both are used simultaneously. finally, we show the impacts of different physical layer parameters +on the system performance through numerical analysis.",7 +"abstract +noisy data, non-convex objectives, model misspecification, and numerical instability can all +cause undesired behaviors in machine learning +systems. as a result, detecting actual implementation errors can be extremely difficult. 
we
demonstrate a methodology in which developers
use an interactive proof assistant to both implement their system and to state a formal theorem
defining what it means for their system to be correct. the process of proving this theorem interactively in the proof assistant exposes all implementation errors since any error in the program
would cause the proof to fail. as a case study,
we implement a new system, certigrad, for optimizing over stochastic computation graphs, and
we generate a formal (i.e. machine-checkable)
proof that the gradients sampled by the system
are unbiased estimates of the true mathematical
gradients. we train a variational autoencoder using certigrad and find the performance comparable to training the same model in tensorflow.",2
"abstract. we characterize ding modules and complexes over ding-chen
rings. we show that over a ding-chen ring r, the ding projective (resp.
ding injective, resp. ding flat) r-modules coincide with the gorenstein projective (resp. gorenstein injective, resp. gorenstein flat) modules, which in
turn are nothing more than modules appearing as a cycle of an exact complex
of projective (resp. injective, resp. flat) modules. we prove a similar characterization for chain complexes of r-modules: a complex x is ding projective
(resp. ding injective, resp. ding flat) if and only if each component x_n is ding
projective (resp. ding injective, resp. ding flat). along the way, we generalize
some results of stovicek and bravo-gillespie-hovey to obtain other interesting
corollaries. for example, we show that over any noetherian ring, any exact
chain complex with gorenstein injective components must have all cotorsion
cycle modules. that is, ext^1_r(f, z_n(i)) = 0 for any such complex i and flat
module f.
on the other hand, over any coherent ring, the cycles of any exact
complex p with projective components must satisfy ext^1_r(z_n(p), a) = 0 for any
absolutely pure module a.",0
"abstract
the goal of a hub-based distance labeling scheme for a network g = (v, e) is to assign a small
subset s(u) ⊆ v to each node u ∈ v, in such a way that for any pair of nodes u, v, the intersection
of hub sets s(u) ∩ s(v) contains a node on the shortest uv-path.
the existence of small hub sets, and consequently efficient shortest path processing algorithms,
for road networks is an empirical observation. a theoretical explanation for this phenomenon was
proposed by abraham et al. (soda 2010) through a network parameter they called highway dimension, which captures the size of a hitting set for a collection of shortest paths of length at least r
intersecting a given ball of radius 2r. in this work, we revisit this explanation, introducing a more
tractable (and directly comparable) parameter based solely on the structure of shortest-path spanning
trees, which we call skeleton dimension. we show that skeleton dimension admits an intuitive definition for both directed and undirected graphs, provides a way of computing labels more efficiently
than by using highway dimension, and leads to comparable or stronger theoretical bounds on hub
set size.",8
"abstract
the wasserstein metric is an important measure of distance between probability distributions, with
several applications in machine learning, statistics, probability theory, and data analysis. in this
paper, we upper and lower bound minimax rates for the problem of estimating a probability distribution under wasserstein loss, in terms of metric properties, such as covering and packing numbers,
of the underlying sample space.
+keywords: wasserstein distance; density estimation; minimax theory; covering number; packing number",10 +"abstract—the internet of things (iot) is continuously growing +to connect billions of smart devices anywhere and anytime in an +internet-like structure, which enables a variety of applications, +services and interactions between human and objects. in the +future, the smart devices are supposed to be able to autonomously +discover a target device with desired features and thus yield +the computing service, network service and data fusion that +leads to the generation of a set of entirely new services and +applications that are not supervised or even imagined by human +beings. the pervasiveness of smart devices, as well as the +heterogeneity of their design and functionalities, raise a major +concern: how can a smart device efficiently discover a desired +target device? in this paper, we propose a social-aware and +distributed (sand) scheme that achieves a fast, scalable and +efficient device discovery in the iot. the proposed sand scheme +adopts a novel device ranking criteria that measures the device’s +degree, social relationship diversity, clustering coefficient and +betweenness. based on the device ranking criteria, the discovery +request can be guided to travel through critical devices that stand +at the major intersections of the network, and thus quickly reach +the desired target device by contacting only a limited number of +intermediate devices. we conduct comprehensive simulations on +both random networks and scale-free networks to evaluate the +performance of sand in terms of the discovery success rate, the +number of devices contacted and the number of communication +hops. the simulation results demonstrate the effectiveness of +sand. 
with the help of such an intelligent device discovery
as sand, the iot devices, as well as other computing facilities,
software and data on the internet, can autonomously establish
new social connections with each other as human beings do.
they can formulate self-organized computing groups to perform
required computing tasks, facilitate a fusion of a variety of
computing service, network service and data to generate novel
applications and services, evolve from individual artificial
intelligence to collaborative intelligence, and eventually enable
the birth of a robot society.
index terms—internet of things, social-aware, distributed,
device discovery, computing, network and data fusion, robot
society.",2
"abstract. associated to any orthogonal representation of a countable discrete group is a probability
measure-preserving action called the gaussian action. using the polish model formalism we developed
before, we compute the entropy (in the sense of bowen, kerr-li) of gaussian actions when the group is
sofic. computations of entropy for gaussian actions have only been done when the acting group is abelian
and thus our results are new even in the amenable case. fundamental to our approach are methods of
noncommutative harmonic analysis and c*-algebras which replace the fourier analysis used in the abelian
case.",4
"abstract. the (full) extended plus closure was developed as a replacement
for tight closure in mixed characteristic rings. here it is shown by adapting andré's perfectoid algebra techniques that, for complete local rings, this
closure has the colon-capturing property. in fact, more generally, if r is
a (possibly ramified) complete regular local ring of mixed characteristic, i
and j are ideals of r, and the local domain s is a finite r-module, then
(is : j) ⊆ (i : j)s^epf.
a consequence is that all ideals in regular local rings
are closed, a fact which implies the validity of the direct summand conjecture
and the briançon-skoda theorem in mixed characteristic.",0
"abstract. we introduce a class of countable groups by some abstract group-theoretic conditions. it includes linear groups with finite amenable radical
and finitely generated residually finite groups with some non-vanishing ℓ2-betti numbers that are not virtually a product of two infinite groups. further,
it includes acylindrically hyperbolic groups. for any group γ in this class we
determine the general structure of the possible lattice embeddings of γ, i.e. of
all compactly generated, locally compact groups that contain γ as a lattice.
this leads to a precise description of possible non-uniform lattice embeddings
of groups in this class. further applications include the determination of possible lattice embeddings of fundamental groups of closed manifolds with pinched
negative curvature.",4
"abstract
the paper presents a study of an adaptive approach to lateral skew control for an experimental railway
stand. the preliminary experiments with the real experimental railway stand and simulations with its
3-d mechanical model indicate difficulties of model-based control of the device. thus, the use of neural
networks for identification and control of lateral skew is investigated. this paper focuses on real-data-based modelling of the railway stand by various neural network models, i.e., linear neural unit and
quadratic neural unit architectures. furthermore, training methods for these neural architectures, namely
real-time recurrent learning and a variation of back-propagation through time, are examined,
accompanied by a discussion of the produced experimental results.",9
"abstract—we consider the problem of sequential transmission
of gauss–markov sources.
we show that in the limit of large +spatial block lengths, greedy compression with respect to the +squared error distortion is optimal; that is, there is no tension between optimizing the distortion of the source in the current time +instant and that of future times. we then extend this result to the +case where at time t a random compression rate rt is allocated +independently of the rate at other time instants. this, in turn, +allows us to derive the optimal performance of sequential coding +over packet-erasure channels with instantaneous feedback. for +the case of packet erasures with delayed feedback, we connect +the problem to that of compression with side information that +is known at the encoder and may be known at the decoder — +where the most recent packets serve as side information that may +have been erased. we conclude the paper by demonstrating that +the loss due to a delay by one time unit is rather small. +index terms—sequential coding of correlated sources, successive refinement, source streaming, packet erasures, source coding +with side information.",7 +"abstract +a general approach to knowledge transfer is introduced in +which an agent controlled by a neural network adapts how it +reuses existing networks as it learns in a new domain. networks trained for a new domain can improve their performance by routing activation selectively through previously +learned neural structure, regardless of how or for what it was +learned. a neuroevolution implementation of this approach +is presented with application to high-dimensional sequential decision-making domains. this approach is more general than previous approaches to neural transfer for reinforcement learning. it is domain-agnostic and requires no prior assumptions about the nature of task relatedness or mappings. 
the method is analyzed in a stochastic version of the arcade +learning environment, demonstrating that it improves performance in some of the more complex atari 2600 games, +and that the success of transfer can be predicted based on a +high-level characterization of game dynamics.",9 +"abstract +the vertices of the four-dimensional 600-cell form a non-crystallographic root system whose +corresponding symmetry group is the coxeter group h4 . there are two special coordinate representations of this root system in which they and their corresponding coxeter groups involve only +rational numbers and the golden ratio τ . the two are related by the conjugation τ ↦ τ ′ = −1/τ . +this paper investigates what happens when the two root systems are combined and the group +generated by both versions of h4 is allowed to operate on them. the result is a new, but infinite, +‘root system’ σ which itself turns out to have a natural structure of the unitary group su (2, r) +over the ring r = z[1/2, τ ] (called here golden numbers). acting upon it is the naturally associated +infinite reflection group h∞ , which we prove is of index 2 in the orthogonal group o(4, r). the +paper makes extensive use of the quaternions over r and leads to a highly structured discretized +filtration of su (2). we use this to offer a simple and effective way to approximate any element of +su (2) to any degree of accuracy required using the repeated actions of just five fixed reflections, +a process that may find application in computational methods in quantum mechanics.",4 +"abstract—the bittorrent mechanism effectively spreads file +fragments by copying the rarest fragments first. we propose +to apply a mathematical model for the diffusion of fragments +on a p2p network in order to take into account both the effects of +peer distances and the changing availability of peers while +time goes on.
moreover, we manage to provide a forecast on +the availability of a torrent thanks to a neural network that +models the behaviour of peers on the p2p system. +the combination of the mathematical model and the neural +network provides a solution for choosing file fragments that +need to be copied first, in order to ensure their continuous +availability, counteracting possible disconnections by some +peers. +keywords-dependability, distributed caching, p2p, neural +networks, wavelet analysis.",9 +"abstract—a sum-network is an instance of a function computation problem over a directed acyclic network in which each +terminal node wants to compute the sum over a finite field of the +information observed at all the source nodes. many characteristics of the well-studied multiple unicast network communication +problem also hold for sum-networks due to a known reduction +between the two problems. in this work, we describe an algorithm +to construct families of sum-network instances using incidence +structures. the computation capacity of several of these sumnetwork families is evaluated. unlike the coding capacity of +a multiple unicast problem, the computation capacity of sumnetworks depends on the characteristic of the finite field over +which the sum is computed. this dependence is very strong; +we show examples of sum-networks that have a rate-1 solution +over one characteristic but a rate close to zero over a different +characteristic. additionally, a sum-network can have arbitrarily +different computation capacities for different alphabets. +index terms—network coding, function computation, sumnetworks, characteristic, incidence structures",7 +"abstract. let g be a countable group, sub(g) the (compact, metric) space of all subgroups of g with the chabauty topology and is(g) ⊆ sub(g) the collection of isolated +points. we denote by x! the (polish) group of all permutations of a countable set x. 
then +the following properties are equivalent: (i) is(g) is dense in sub(g), (ii) g admits a “generic +permutation representation”. namely there exists some τ ∗ ∈ hom(g, x!) such that the collection of permutation representations {ϕ ∈ hom(g, x!) | ϕ is permutation isomorphic to τ ∗ } +is co-meager in hom(g, x!). we call groups satisfying these properties solitary. examples +of solitary groups include finitely generated lerf groups and groups with countably many +subgroups.",4 +"abstract +this paper presents a simple extension of the binary heap, the list +heap. we use list heaps to demonstrate the idea of adaptive heaps: +heaps whose performance is a function of both the size of the problem +instance and the disorder of the problem instance. we focus on the presortedness of the input sequence as a measure of disorder for the problem +instance. a number of practical applications that rely on heaps deal with +input that is not random. even random input contains presorted subsequences. devising heaps that exploit this structure may provide a means +for improving practical performance. we present some basic empirical +tests to support this claim. additionally, adaptive heaps may provide an +interesting direction for theoretical investigation.",8 +"abstract +human aware planning requires an agent to be aware of the intentions, capabilities and mental model of the human in the loop +during its decision process. this can involve generating plans that +are explicable to a human observer as well as the ability to provide +explanations when such plans cannot be generated. in this paper, +we bring these two concepts together and show how an agent can +account for both these needs and achieve a trade-off during the plan +generation process itself by means of a model-space search method +mega. 
this in effect provides a comprehensive perspective of what it +means for a decision-making agent to be “human-aware” by bringing together existing principles of planning under the umbrella +of a single plan generation process. we situate our discussion in +the context of recent work on explicable planning and explanation +generation, and illustrate these concepts in modified versions of +two well-known planning domains, as well as in a demonstration +of a robot involved in a typical search and reconnaissance task with +an external supervisor. human factor studies in the latter highlight +the usefulness of the proposed approaches.",2 +"abstract +this is a contribution to the formalization of the concept of +agents in multivariate markov chains. agents are commonly +defined as entities that act, perceive, and are goal-directed. in +a multivariate markov chain (e.g. a cellular automaton) the +transition matrix completely determines the dynamics. this +seems to contradict the possibility of acting entities within +such a system. here we present definitions of actions and perceptions within multivariate markov chains based on entity-sets. entity-sets represent a largely independent choice of a +set of spatiotemporal patterns that are considered as all the +entities within the markov chain. for example, the entity-set can be chosen according to operational closure conditions +or complete specific integration. importantly, the perception-action loop also induces an entity-set and is a multivariate +markov chain. we then show that our definition of actions +leads to non-heteronomy and that of perceptions specializes to +the usual concept of perception in the perception-action loop.",2 +"abstract— recurrent neural networks (rnns) with +sophisticated units that implement a gating mechanism +have emerged as a powerful technique for modeling +sequential signals such as speech or electroencephalography +(eeg). the latter is the focus of this paper.
a significant +big data resource, known as the tuh eeg corpus +(tueeg), has recently become available for eeg research, +creating a unique opportunity to evaluate these recurrent +units on the task of seizure detection. in this study, we +compare two types of recurrent units: long short-term +memory units (lstm) and gated recurrent units (gru). +these are evaluated using a state-of-the-art hybrid +architecture that integrates convolutional neural +networks (cnns) with rnns. we also investigate a variety +of initialization methods and show that initialization is +crucial since poorly initialized networks cannot be trained. +furthermore, we explore regularization of these +convolutional gated recurrent networks to address the +problem of overfitting. our experiments revealed that +convolutional lstm networks can achieve significantly +better performance than convolutional gru networks. the +convolutional lstm architecture with proper initialization +and regularization delivers 30% sensitivity at 6 false alarms +per 24 hours.",2 +"abstract. we study the nodetrix planarity testing problem for flat +clustered graphs when the maximum size of each cluster is bounded by +a constant k. we consider both the case when the sides of the matrices +to which the edges are incident are fixed and the case when they can be +arbitrarily chosen. we show that nodetrix planarity testing with fixed +sides can be solved in o(k^(3k+3/2) n^3) time for every flat clustered graph +that can be reduced to a partial 2-tree by collapsing its clusters into +single vertices. in the general case, nodetrix planarity testing with fixed +sides can be solved in o(n^3) time for k = 2, but it is np-complete for +any k ≥ 3. nodetrix planarity testing remains np-complete also in the +free side model when k > 4.",8 +"abstract +recently, macdonald et al.
showed that many algorithmic problems for finitely generated nilpotent groups including computation of normal forms, the subgroup membership problem, the conjugacy problem, and computation of subgroup presentations can be done in logspace. here +we follow their approach and show that all these problems are complete for the uniform circuit +class tc0 – uniformly for all r-generated nilpotent groups of class at most c for fixed r and c. +in order to solve these problems in tc0 , we show that the unary version of the extended gcd +problem (compute greatest common divisors and express them as linear combinations) is in tc0 . +moreover, if we allow a certain binary representation of the inputs, then the word problem +and computation of normal forms is still in uniform tc0 , while all the other problems we examine +are shown to be tc0 -turing reducible to the binary extended gcd problem. +keywords and phrases nilpotent groups, tc0 , abelian groups, word problem, conjugacy problem, +subgroup membership problem, greatest common divisors",4 +"abstract +designing mechanisms that leverage cooperation between agents +has been a long-lasting goal in multiagent systems. the task is +especially challenging when agents are selfish, lack common goals +and face social dilemmas, i.e., situations in which individual interest conflicts with social welfare. past works explored mechanisms +that explain cooperation in biological and social systems, providing important clues for the aim of designing cooperative artificial +societies. in particular, several works show that cooperation is able +to emerge when specific network structures underlie agents’ interactions. notwithstanding, social dilemmas in which defection +is highly tempting still pose challenges concerning the effective +sustainability of cooperation. here we propose a new redistribution mechanism that can be applied in structured populations of +agents. 
importantly, we show that, when implemented locally (i.e., +agents share a fraction of their wealth surplus with their nearest +neighbors), redistribution excels in promoting cooperation under +regimes where, before, only defection prevailed.",2 +"abstract +lfr is a popular benchmark graph generator used to evaluate community detection +algorithms. we present em-lfr, the first external memory algorithm able to generate +massive complex networks following the lfr benchmark. its most expensive component +is the generation of random graphs with prescribed degree sequences which can be divided +into two steps: the graphs are first materialized deterministically using the havel-hakimi +algorithm, and then randomized. our main contributions are em-hh and em-es, two i/o-efficient external memory algorithms for these two steps. we also propose em-cm/es, an +alternative sampling scheme using the configuration model and rewiring steps to obtain a +random simple graph. in an experimental evaluation we demonstrate their performance; our +implementation is able to handle graphs with more than 37 billion edges on a single machine, +is competitive with a massively parallel distributed algorithm, and is faster than a state-of-the-art internal memory implementation even on instances fitting in main memory. em-lfr’s +implementation is capable of generating large graph instances orders of magnitude faster than +the original implementation. we give evidence that both implementations yield graphs with +matching properties by applying clustering algorithms to generated instances. similarly, we +analyse the evolution of graph properties as em-es is executed on networks obtained with +em-cm/es and find that the alternative approach can accelerate the sampling process.",8 +"abstract +understanding the behaviour of heuristic search methods is a challenge. this +even holds for simple local search methods such as 2-opt for the traveling salesperson problem.
in this paper, we present a general framework that is able to construct a diverse set of instances that are hard or easy for a given search heuristic. +such a diverse set is obtained by using an evolutionary algorithm for constructing hard or easy instances that are diverse with respect to different features of +the underlying problem. examining the constructed instance sets, we show that +many combinations of two or three features give a good classification of the tsp +instances in terms of whether they are hard to be solved by 2-opt.",9 +"abstract +the plane diameter completion problem asks, given a plane graph g and +a positive integer d, if it is a spanning subgraph of a plane graph h that has +diameter at most d. we examine two variants of this problem where the input +comes with another parameter k. in the first variant, called bpdc, k upper +bounds the total number of edges to be added and in the second, called bfpdc, +k upper bounds the number of additional edges per face. we prove that both +problems are np-complete, the first even for 3-connected graphs of face-degree at +most 4 and the second even when k = 1 on 3-connected graphs of face-degree at +most 5. in this paper we give parameterized algorithms for both problems that +run in o(n3 ) + 22",8 +"abstract +constructing a sparse spanning subgraph is a fundamental primitive in graph theory. in +this paper, we study this problem in the centralized local model, where the goal is to decide +whether an edge is part of the spanning subgraph by examining only a small part of the input; +yet, answers must be globally consistent and independent of prior queries. +unfortunately, maximally sparse spanning subgraphs, i.e., spanning trees, cannot be constructed efficiently in this model. therefore, we settle for a spanning subgraph containing at +most (1 + ε)n edges (where n is the number of vertices and ε is a given approximation/sparsity +parameter).
we achieve a query complexity of õ(poly(∆/ε)n2/3 ), where ∆ is the maximum +degree of the input graph. our algorithm is the first to do so on arbitrary bounded degree +graphs. moreover, we achieve the additional property that our algorithm outputs a spanner, i.e., +distances are approximately preserved. with high probability, for each deleted edge there is a +path of o(log n · (∆ + log n)/ε) hops in the output that connects its endpoints.",8 +"abstract +deep convolutional neural networks (cnns) have shown excellent performance in +object recognition tasks and dense classification problems such as semantic segmentation. however, training deep neural networks on large and sparse datasets +is still challenging and can require large amounts of computation and memory. +in this work, we address the task of performing semantic segmentation on large +data sets, such as three-dimensional medical images. we propose an adaptive +sampling scheme that uses a-posteriori error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. our +contribution is threefold: 1) we give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images. +2) we propose a deep dual path cnn that captures information at fine and +coarse scales, resulting in a network with a large field of view and high resolution outputs. 3) we show that our method is able to attain new state-of-the-art +results on the visceral anatomy benchmark.",1 +"abstract +variable selection for models including interactions between explanatory variables often needs to obey certain hierarchical constraints. +the weak or strong structural hierarchy requires that the existence of +an interaction term implies at least one or both associated main effects +to be present in the model.
lately, this problem has attracted a lot of +attention, but existing computational algorithms converge slowly even +with a moderate number of predictors. moreover, in contrast to the +rich literature on ordinary variable selection, there is a lack of statistical theory to show reasonably low error rates of hierarchical variable +selection. this work investigates a new class of estimators that make +use of multiple group penalties to capture structural parsimony. we +give the minimax lower bounds for strong and weak hierarchical variable selection and show that the proposed estimators enjoy sharp rate +oracle inequalities. a general-purpose algorithm is developed with +guaranteed convergence and global optimality. simulations and real +data experiments demonstrate the efficiency and efficacy of the proposed approach.",10 +"abstract—millimeter wave (mmwave) communications have +been considered as a key technology for next generation cellular +systems and wi-fi networks because of their advances in providing orders-of-magnitude wider bandwidth than current wireless +networks. economical and energy-efficient analog/digital hybrid +precoding and combining transceivers have often been proposed +for mmwave massive multiple-input multiple-output (mimo) +systems to overcome the severe propagation loss of mmwave +channels. one major shortcoming of existing solutions lies in the +assumption of infinite or high-resolution phase shifters (pss) to +realize the analog beamformers. however, low-resolution pss are +typically adopted in practice to reduce the hardware cost and +power consumption. motivated by this fact, in this paper, we +investigate the practical design of hybrid precoders and combiners with low-resolution pss in mmwave mimo systems. in +particular, we propose an iterative algorithm which successively +designs the low-resolution analog precoder and combiner pair for +each data stream, aiming at conditionally maximizing the spectral +efficiency.
then, the digital precoder and combiner are computed +based on the obtained effective baseband channel to further +enhance the spectral efficiency. in an effort to achieve an even +more hardware-efficient large antenna array, we also investigate +the design of hybrid beamformers with one-bit resolution (binary) +pss, and present a novel binary analog precoder and combiner +optimization algorithm with quadratic complexity in the number +of antennas. the proposed low-resolution hybrid beamforming +design is further extended to multiuser mimo communication +systems. simulation results demonstrate the performance advantages of the proposed algorithms compared to existing lowresolution hybrid beamforming designs, particularly for the onebit resolution ps scenario. +index terms—millimeter wave (mmwave) communications, +hybrid precoder, multiple-input multiple-output (mimo), phase +shifters, one-bit quantization.",7 +"abstract: we study the recovery conditions of weighted ℓ1 minimization for real-valued signal reconstruction from phaseless compressive sensing measurements when partial support information is available. +a strong restricted isometry property condition is provided to ensure the stable recovery. moreover, we +present the weighted null space property as the sufficient and necessary condition for the success of k-sparse +phaseless recovery via weighted ℓ1 minimization. numerical experiments are conducted to illustrate our +results. +keywords: phaseless compressive sensing; partial support information; strong restricted isometry property; weighted null space property.",7 +"abstract. in this paper we derive inferential results for a new index of inequality, specifically defined for capturing significant changes observed both +in the left and in the right tail of the income distributions. the latter shifts +are an apparent fact for many countries like us, germany, uk, and france +in the last decades, and are a concern for many policy makers. 
we propose +two empirical estimators for the index, and show that they are asymptotically +equivalent. afterwards, we adopt one estimator and prove its consistency and +asymptotic normality. finally, we introduce an empirical estimator for its variance and provide conditions to show its convergence to the finite theoretical +value. an analysis of real data on net income from the bank of italy survey +of income and wealth is also presented, on the basis of the obtained inferential +results. +keywords and phrases: income inequality, lorenz curve, gini index, consistency, asymptotic normality, economic inequality, confidence interval, nonparametric estimator.",10 +"abstract—evolution sculpts both the body plans and nervous +systems of agents together over time. in contrast, in ai and +robotics, a robot’s body plan is usually designed by hand, and +control policies are then optimized for that fixed design. the task +of simultaneously co-optimizing the morphology and controller +of an embodied robot has remained a challenge. in psychology, +the theory of embodied cognition posits that behavior arises +from a close coupling between body plan and sensorimotor +control, which suggests why co-optimizing these two subsystems +is so difficult: most evolutionary changes to morphology tend +to adversely impact sensorimotor control, leading to an overall +decrease in behavioral performance. here, we further examine +this hypothesis and demonstrate a technique for “morphological innovation protection”, which temporarily reduces selection +pressure on recently morphologically-changed individuals, thus +enabling evolution some time to “readapt” to the new morphology +with subsequent control policy mutations. we show the potential +for this method to avoid local optima and converge to similar +highly fit morphologies across widely varying initial conditions, +while sustaining fitness improvements further into optimization.
+while this technique is admittedly only the first of many steps +that must be taken to achieve scalable optimization of embodied +machines, we hope that theoretical insight into the cause of +evolutionary stagnation in current methods will help to enable +the automation of robot design and behavioral training – while +simultaneously providing a testbed to investigate the theory of +embodied cognition.",2 +"abstract. we propose a method for encoding iterators (and recursion +operators in general) using interaction nets (ins). there are two main +applications for this: the method can be used to obtain a visual notation for functional programs; and it can be used to extend the existing +translations of the λ-calculus into ins to languages with recursive types.",6 +"abstract +humans can understand and produce new utterances effortlessly, thanks to their compositional +skills. once a person learns the meaning of a +new verb “dax,” he or she can immediately understand the meaning of “dax twice” or “sing and +dax.” in this paper, we introduce the scan domain, consisting of a set of simple compositional +navigation commands paired with the corresponding action sequences. we then test the zero-shot +generalization capabilities of a variety of recurrent neural networks (rnns) trained on scan +with sequence-to-sequence methods. we find that +rnns can make successful zero-shot generalizations when the differences between training and +test commands are small, so that they can apply “mix-and-match” strategies to solve the task. +however, when generalization requires systematic compositional skills (as in the “dax” example +above), rnns fail spectacularly. 
we conclude +with a proof-of-concept experiment in neural machine translation, suggesting that lack of systematicity might be partially responsible for neural +networks’ notorious training data thirst.",2 +"abstract +in this paper, an alternate module (a, φ) is a finite abelian group a +with a z-bilinear application φ : a × a → q/z which is alternate (i.e. +zero on the diagonal). we shall prove that any alternate module is +subsymplectic, i.e. if (a, φ) has a lagrangian of cardinal n then there +exists an abelian group b of order n such that (a, φ) is a submodule +of the standard symplectic module b × b ∗ .",4 +"abstract +object-oriented scripting languages such as javascript or python gain in +popularity due to their flexibility. still, the growing code bases written in these +languages call for methods that make it possible to automatically control the +properties of the programs that ensure their stability at run time. we +propose a type system, called lucretia, that makes it possible to control the object structure of languages with reflection. subject reduction and soundness +of the type system with respect to the semantics of the language are proved.",6 +"abstract machine comprising an execution engine and protected memory: the adversary cannot +see sensitive data as it is being operated on, nor can it observe such data at rest in memory. such +an abstract machine can be realized by encrypting the data in memory and then performing computations using cryptographic mechanisms (e.g., secure multi-party computation [yao 1986]) or +secure processors [hoekstra 2015; suh et al. 2003; thekkath et al. 2000]. +unfortunately, a secure abstract machine does not defend against an adversary that can observe +memory access patterns [islam et al. 2012; maas et al. 2013; zhuang et al. 2004] and instruction +timing [brumley and boneh 2003; kocher 1996], among other “side” channels of information.
for +cloud computing, such an adversary is the cloud provider itself, which has physical access to its +machines, and so can observe traffic on the memory bus. +a countermeasure against an unscrupulous provider is to augment the secure processor to store +code and data in oblivious ram (oram) [maas et al. 2013; suh et al. 2003]. first proposed by +goldreich and ostrovsky [goldreich 1987; goldreich and ostrovsky 1996], oram obfuscates the +mapping between addresses and data, in effect “encrypting” the addresses along with the data. replacing ram with oram solves (much of) the security problem but incurs a substantial slowdown +in practical situations [liu et al. 2015, 2013; maas et al. 2013] as reads/writes add overhead that is +polylogarithmic in the size of the memory. +recent work has explored methods for reducing the cost of programming with oram. liu et al. +[2015, 2013, 2014] developed a family of type systems to check when partial use of oram (alongside normal, encrypted ram) results in no loss of security; i.e., only when the addresses of secret +data could (indirectly) reveal sensitive information must the data be stored in oram. this optimization can provide order-of-magnitude (and asymptotic) performance improvements. wang +working draft as of july 7, 2017.",6 +"abstract +in this paper we present a new approach to control variates for improving computational efficiency of ensemble monte carlo. we present the approach using simulation of paths of a time-dependent nonlinear stochastic equation. the core idea +is to extract information at one or more nominal model parameters and use this +information to gain estimation efficiency at neighboring parameters. this idea is the +basis of a general strategy, called database monte carlo (dbmc), for improving +efficiency of monte carlo. in this paper we describe how this strategy can be implemented using the variance reduction technique of control variates (cv). 
we show +that, once an initial setup cost for extracting information is incurred, this approach +can lead to significant gains in computational efficiency. the initial setup cost is +justified in projects that require a large number of estimations or in those that are +to be performed under real-time constraints. +key words: monte carlo, variance reduction, control variates +pacs: s05.10.ln, 02.70.uu, 02.70.tt",5 +"abstract: regular variation is often used as the starting point for modeling multivariate heavy-tailed +data. a random vector is regularly varying if and only if its radial part r is regularly varying and +is asymptotically independent of the angular part θ as r goes to infinity. the conditional limiting +distribution of θ given r is large characterizes the tail dependence of the random vector and hence its +estimation is the primary goal of applications. a typical strategy is to look at the angular components +of the data for which the radial parts exceed some threshold. while a large class of methods has been +proposed to model the angular distribution from these exceedances, the choice of threshold has been +scarcely discussed in the literature. in this paper, we describe a procedure for choosing the threshold +by formally testing the independence of r and θ using a measure of dependence called distance +covariance. we generalize the limit theorem for distance covariance to our unique setting and propose +an algorithm which selects the threshold for r. this algorithm incorporates a subsampling scheme +that is also applicable to weakly dependent data. moreover, it avoids the heavy computation in the +calculation of the distance covariance, a typical limitation for this measure. the performance of our +method is illustrated on both simulated and real data. 
keywords and phrases: distance covariance, heavy-tailed data, multivariate regular variation, +threshold selection.",10 +"abstract we consider the complex ind-group g = sl(∞, c) and its real forms +g0 = su(∞, ∞), su(p, ∞), sl(∞, r), sl(∞, h). our main objects of study are the +g0 -orbits on an ind-variety g/p for an arbitrary splitting parabolic ind-subgroup +p ⊂ g, under the assumption that the subgroups g0 ⊂ g and p ⊂ g are aligned +in a natural way. we prove that the intersection of any g0 -orbit on g/p with a +finite-dimensional flag variety gn /pn from a given exhaustion of g/p via gn /pn +for n → ∞, is a single (g0 ∩ gn )-orbit. we also characterize all ind-varieties g/p +on which there are finitely many g0 -orbits, and provide criteria for the existence of +open and closed g0 -orbits on g/p in the case of infinitely many g0 -orbits. +keywords: homogeneous ind-variety, real group orbit, generalized flag. +ams subject classification: 14l30, 14m15, 22f30, 22e65.",4 +"abstract +a space x is said to be lipschitz 1-connected if every lipschitz loop +γ : s1 → x bounds an o(lip(γ))-lipschitz disk f : d2 → x. a lipschitz 1-connected space admits a quadratic isoperimetric inequality, +but it is unknown whether the converse is true.
cornulier and tessera +showed that certain solvable lie groups have quadratic isoperimetric +inequalities, and we extend their result to show that these groups are +lipschitz 1-connected.",4 +"abstract +neuroscience research has produced many theories and computational neural models of sensory +nervous systems. notwithstanding many different perspectives towards developing intelligent machines, +artificial intelligence has ultimately been influenced by neuroscience. therefore, this paper provides an +introduction to biologically inspired machine intelligence by exploring the basic principles of sensation +and perception as well as the structure and behavior of biological sensory nervous systems like the +neocortex. concepts like spike timing, synaptic plasticity, inhibition, neural structure, and neural behavior +are applied to a new model, simple cortex (sc). a software implementation of sc has been built and +demonstrates fast observation, learning, and prediction of spatio-temporal sensory-motor patterns and +sequences. finally, this paper suggests future areas of improvement and growth for simple cortex and +other related machine intelligence models.",2 +"abstract +in this paper, we show that all coleman automorphisms of a finite +group with self-central minimal non-trivial characteristic subgroup are inner; therefore the normalizer property holds for these groups. using our +methods we show that the holomorph and wreath product of finite simple +groups, among others, have no non-inner coleman automorphisms. as a +further application of our theorems, we provide partial answers to questions raised by m. hertweck and w. kimmerle.
furthermore, we characterize the coleman automorphisms of extensions of a finite nilpotent
group by a cyclic p-group. lastly, we note that class-preserving automorphisms of 2-power order of some nilpotent-by-nilpotent groups are inner,
extending a result by j. hai and j. ge.",4
"abstract
we present an information-theoretic framework for bounding the number of labeled samples needed to train a
classifier in a parametric bayesian setting. using ideas from rate-distortion theory, we derive bounds on the average
lp distance between the learned classifier and the true maximum a posteriori classifier—which are well-established
surrogates for the excess classification error due to imperfect learning. we provide lower and upper bounds on
the rate-distortion function, using lp loss as the distortion measure, of a maximum a posteriori classifier in terms
of the differential entropy of the posterior distribution and a quantity called the interpolation dimension, which
characterizes the complexity of the parametric distribution family. in addition to expressing the information content
of a classifier in terms of lossy compression, the rate-distortion function also expresses the minimum number of
bits a learning machine needs to extract from training data in order to learn a classifier to within a specified lp
tolerance. then, we use results from universal source coding to express the information content in the training
data in terms of the fisher information of the parametric family and the number of training samples available.
the result is a framework for computing lower bounds on the bayes lp risk. this framework complements the
well-known probably approximately correct (pac) framework, which provides minimax risk bounds involving the
vapnik-chervonenkis dimension or rademacher complexity.
whereas the pac framework provides upper bounds on
the risk for the worst-case data distribution, the proposed rate-distortion framework lower bounds the risk averaged
over the data distribution. we evaluate the bounds for a variety of data models, including categorical, multinomial,
and gaussian models. in each case the bounds are provably tight orderwise, and in two cases we prove that the
bounds are tight up to multiplicative constants.
index terms
supervised learning; rate-distortion theory; bayesian methods; parametric statistics.",7
"abstract
we present a sorting algorithm for the case of recurrent random comparison errors. the algorithm
essentially achieves simultaneously good properties of previous algorithms for sorting n distinct
elements in this model. in particular, it runs in o(n^2) time, the maximum dislocation of the
elements in the output is o(log n), while the total dislocation is o(n). these guarantees are the
best possible since we prove that even randomized algorithms cannot achieve o(log n) maximum
dislocation with high probability, or o(n) total dislocation in expectation, regardless of their
running time.
1998 acm subject classification f.2.2 sorting and searching
keywords and phrases sorting, recurrent comparison error, maximum and total dislocation",8
abstract.,0
"abstract—recently, a new polynomial basis over binary extension fields was proposed such that the fast fourier transform
(fft) over such fields can be computed in the complexity of
order o(n lg(n)), where n is the number of points evaluated in
fft. in this work, we reformulate this fft algorithm such that it
can be more easily understood and be extended to develop frequency-domain decoding algorithms for (n = 2^m, k) systematic reed-solomon (rs) codes over f_{2^m}, m ∈ z+, with n − k a power of
two. first, the basis of syndrome polynomials is reformulated in
the decoding procedure so that the new transforms can be applied
to the decoding procedure.
a fast extended euclidean algorithm
is developed to determine the error locator polynomial. the
computational complexity of the proposed decoding algorithm
is o(n lg(n − k) + (n − k) lg^2(n − k)), improving upon the best
currently available decoding complexity o(n lg^2(n) lg lg(n)), and
reaching the best known complexity bound that was established
by justesen in 1976. however, justesen’s approach is only for the
codes over some specific fields, which can apply cooley-tukey
ffts. as revealed by the computer simulations, the proposed
decoding algorithm is 50 times faster than the conventional one
for the (2^16, 2^15) rs code over f_{2^16}.",8
abstract,3
"abstract set. a family {z_s : s ∈ s} of real random variables is called a centered gaussian
process when each element z_s has mean zero and each finite subcollection {z_s1, . . . , z_sn} has a jointly
gaussian distribution.
fact a.3 (gaussian minimax theorem). let t and u be finite sets. consider two centered gaussian processes
{x_tu} and {y_tu}, indexed over t × u. for all choices of indices, suppose that
e[x_tu^2] = e[y_tu^2],
e[x_tu x_tu′] ≤ e[y_tu y_tu′],
e[x_tu x_t′u′] ≥ e[y_tu y_t′u′] when t ≠ t′.
then, for all real numbers λ_tu and ζ,
p( min_{t∈t} max_{u∈u} (λ_tu + x_tu) ≥ ζ ) ≥ p( min_{t∈t} max_{u∈u} (λ_tu + y_tu) ≥ ζ ).",7
"abstract— almost-global orientation trajectory tracking for
a rigid body with external actuation has been well studied
in the literature, and in the geometric setting as well. the
tracking control law relies on the fact that a rigid body is a
simple mechanical system (sms) on the 3-dimensional group of
special orthogonal matrices. however, the problem of designing
feedback control laws for tracking using internal actuation
mechanisms, like rotors or control moment gyros, has received
lesser attention from a geometric point of view.
an internally
actuated rigid body is not a simple mechanical system, and the
phase-space here evolves on the level set of a momentum map.
in this note, we propose a novel proportional integral derivative
(pid) control law for a rigid body with 3 internal rotors that
achieves tracking of feasible trajectories from almost all initial
conditions.",3
"abstract. let siβ (q) be the semi-invariant ring of β-dimensional representations of a quiver q. suppose that (q, β) projects to another quiver with
dimension vector (q′ , β ′ ) through an exceptional representation e. we show
that if siβ (q) is the upper cluster algebra associated to an ice quiver ∆, then
siβ ′ (q′ ) is the upper cluster algebra associated to ∆′ , where ∆′ is obtained
from ∆ through simple operations depending on e. we also study the relation
of their bases using the quiver with potential model.",0
"abstract
heuristic optimisers which search for an optimal
configuration of variables relative to an objective
function often get stuck in local optima where the
algorithm is unable to find further improvement.
the standard approach to circumvent this problem involves periodically restarting the algorithm
from random initial configurations when no further improvement can be found. we propose a
method of partial reinitialization, whereby, in an
attempt to find a better solution, only subsets of
variables are re-initialised rather than the whole
configuration. much of the information gained
from previous runs is hence retained. this leads
to significant improvements in the quality of the
solution found in a given time for a variety of optimisation problems in machine learning.",9
"abstract. we show that if a graded submodule of a noetherian module cannot be written as a proper intersection of graded submodules, then it cannot
be written as a proper intersection of submodules at all.
more generally, we
show that a natural extension of the index of reducibility to the graded setting
coincides with the ordinary index of reducibility. we also investigate the question of uniqueness of the components in a graded-irreducible decomposition,
as well as the relation between the index of reducibility of a non-graded ideal
and that of its largest graded subideal.",0
"abstract. in this paper we study the classification of right-angled artin groups up to commensurability. we characterise the commensurability classes of raags defined by trees of diameter
4. in particular, we prove a conjecture of behrstock and neumann that there are infinitely many
commensurability classes. hence, we give the first examples of raags that are quasi-isometric but
not commensurable.",4
"abstract—stripes is a deep neural network (dnn) accelerator
that uses bit-serial computation to offer performance that is
proportional to the fixed-point precision of the activation values.
the fixed-point precisions are determined a priori using profiling
and are selected at a per-layer granularity. this paper presents
dynamic stripes, an extension to stripes that detects precision
variance at runtime and at a finer granularity. this extra level of
precision reduction increases performance by 41% over stripes.",9
"abstract. we consider three notions of connectivity and their interactions in
partially ordered sets coming from reduced factorizations of an element in a generated group. while one form of connectivity essentially reflects the connectivity
of the poset diagram, the other two are a bit more involved: hurwitz-connectivity
has its origins in algebraic geometry, and shellability in topology. we propose a
framework to study these connectivity properties in a uniform way.
our main
tool is a certain total order of the generators that is compatible with the chosen
element.",4
"abstract—programming languages themselves have a limited
number of reserved keywords and character-based tokens that
define the language specification. however, programmers have a
rich use of natural language within their code through comments,
text literals and naming entities. the programmer-defined names
that can be found in source code are a rich source of information
to build a high level understanding of the project. the goal of
this paper is to apply topic modeling to names used in over
13.6 million repositories and perceive the inferred topics. one
of the problems in such a study is the occurrence of duplicate
repositories not officially marked as forks (obscure forks). we
show how to address it using the same identifiers which are
extracted for topic modeling.
we open with a discussion on naming in source code; we then
elaborate on our approach to remove exact duplicate and fuzzy
duplicate repositories using locality sensitive hashing on the
bag-of-words model and then discuss our work on topic modeling;
and finally present the results from our data analysis together
with open-access to the source code, tools and datasets.
index terms—programming, open source, source code, software repositories, git, github, topic modeling, artm, locality
sensitive hashing, minhash, open dataset, data.world.",6
"abstract—typical coordination schemes for future power
grids require two-way communications. since the number of end
power-consuming devices is large, the bandwidth requirements
for such two-way communication schemes may be prohibitive.
motivated by this observation, we study distributed coordination schemes that require only one-way limited communications.
in this iterative algorithm, +system coordinators broadcast coordinating (or pricing) signals +to the users/devices who update power consumption based +on the received signal. then system coordinators update the +coordinating signals based on the physical measurement of the +aggregate power usage. we provide conditions to guarantee the +feasibility of the aggregated power usage at each iteration so as +to avoid blackout. furthermore, we prove the convergence of +algorithms under these conditions, and establish its rate of convergence. we illustrate the performance of our algorithms using +numerical simulations. these results show that one-way limited +communication may be viable for coordinating/operating the +future smart grids.",3 +"abstract +cerebellum is part of the brain that occupies only 10% of the brain volume, but it contains about 80% of total +number of brain neurons. new cerebellar function model is developed that sets cerebellar circuits in context of +multibody dynamics model computations, as important step in controlling balance and movement coordination, +functions performed by two oldest parts of the cerebellum. model gives new functional interpretation for +granule cells-golgi cell circuit, including distinct function for upper and lower golgi cell dendritc trees, and +resolves issue of sharing granule cells between purkinje cells. sets new function for basket cells, and for +stellate cells according to position in molecular layer. new model enables easily and direct integration of +sensory information from vestibular system and cutaneous mechanoreceptors, for balance, movement and +interaction with environments. model gives explanation of purkinje cells convergence on deep-cerebellar nuclei. 
keywords: new cerebellar function model, cmac, cerebellar elementary processing unit, golgi cell, basket
cell, stellate cell.",5
"abstract
in the closest string problem one is given a family s of equal-length strings over some
fixed alphabet, and the task is to find a string y that minimizes the maximum hamming distance between y and a string from s. while polynomial-time approximation schemes (ptases)
for this problem have been known for a long time [li et al.; j. acm’02], no efficient polynomial-time
approximation scheme (eptas) has been proposed so far. in this paper, we prove that the existence of an eptas for closest string is in fact unlikely, as it would imply that fpt = w[1],
a highly unexpected collapse in the hierarchy of parameterized complexity classes. our proof
also shows that the existence of a ptas for closest string with running time f(ε) · n^{o(1/ε)},
for any computable function f, would contradict the exponential time hypothesis.",8
"abstract. in this paper we study different questions concerning automorphisms of quandles. for
a conjugation quandle q = conj(g) of a group g we determine several subgroups of aut(q)
and find necessary and sufficient conditions under which these subgroups coincide with the whole group
aut(q). in particular, we prove that aut(conj(g)) = z(g) ⋊ aut(g) if and only if either z(g) = 1
or g is one of the groups z_2, z_2^2 or z_3, which solves [3, problem 4.8]. for a big list of takasaki
quandles t(g) of an abelian group g with 2-torsion we prove that the group of inner automorphisms
inn(t(g)) is a coxeter group, which extends the result [3, theorem 4.2] which describes inn(t(g))
and aut(t(g)) for an abelian group g without 2-torsion. we study automorphisms of certain
extensions of quandles and determine some interesting subgroups of the automorphism groups of
these quandles.
also we classify finite quandles q with k-transitive action of aut(q) for k ≥ 3.",4
"abstract
we present a bayesian model selection approach to estimate the intrinsic dimensionality of
a high-dimensional dataset. to this end, we introduce a novel formulation of the probabilistic principal component analysis model based on a normal-gamma prior distribution. in
this context, we exhibit a closed-form expression of the marginal likelihood which allows us to
infer an optimal number of components. we also propose a heuristic based on the expected
shape of the marginal likelihood curve in order to choose the hyperparameters. in non-asymptotic frameworks, we show on simulated data that this exact dimensionality selection
approach is competitive with both bayesian and frequentist state-of-the-art methods.
keywords: dimensionality reduction, marginal likelihood, multivariate analysis, model
selection, principal components.",10
"abstract—this paper proposes a neuro-adaptive distributed cooperative tracking control with prescribed
performance function (ppf) for highly nonlinear multiagent systems. ppf allows error tracking from a predefined large set to be trapped into a predefined small
set. the key idea is to transform the constrained system
into an unconstrained one through transformation of the
output error. agents’ dynamics are assumed to be completely unknown, and the controller is developed for
a strongly connected structured network. the proposed
controller allows all agents to follow the trajectory of
the leader node, while satisfying necessary dynamic
requirements. the proposed approach guarantees uniform ultimate boundedness of the transformed error
and the adaptive neural network weights. simulations
include two examples to validate the robustness and
smoothness of the proposed controller against a highly
nonlinear heterogeneous networked system with time-varying uncertain parameters and external disturbances.
index terms—prescribed performance, transformed
error, multi-agents, neuro-adaptive, distributed
adaptive control, consensus, transient, steady-state
error.",3
"abstract
by paying more attention to semantics-based tool generation, programming language semantics
can significantly increase its impact. ultimately, this may lead to “language design assistants”
incorporating substantial amounts of semantic knowledge.",6
"abstract
evolution of visual object recognition architectures based on convolutional neural networks &
convolutional deep belief networks paradigms has revolutionized artificial vision science. these
architectures extract & learn the real world hierarchical visual features utilizing supervised &
unsupervised learning approaches respectively. neither approach, however, can yet scale up realistically to
provide recognition for a very large number of objects, as high as 10k. we propose a two-level
hierarchical deep learning architecture inspired by the divide & conquer principle that decomposes the large-scale recognition architecture into root & leaf level model architectures. each of the root & leaf level
models is trained exclusively to provide results superior to those possible with any 1-level deep learning
architecture prevalent today. the proposed architecture classifies objects in two steps. in the first step the
root level model classifies the object in a high level category. in the second step, the leaf level recognition
model for the recognized high level category is selected among all the leaf models. this leaf level model
is presented with the same input object image, which it classifies into a specific category. also we propose a
blend of leaf level models trained with either supervised or unsupervised learning approaches.
unsupervised learning is suitable whenever labelled data is scarce for the specific leaf level models.
currently the training of leaf level models is in progress, and we have trained 25 out of the total 47
leaf level models as of now. we have trained the leaf models with the best case top-5 error rate of 3.2%
on the validation data set for the particular leaf models. also we demonstrate that the validation error of
the leaf level models saturates towards the above mentioned accuracy as the number of epochs is
increased to more than sixty. the top-5 error rate for the entire two-level architecture needs to be
computed in conjunction with the error rates of root & all the leaf models. the realization of this two-level visual recognition architecture will greatly enhance the accuracy of the large-scale object
recognition scenarios demanded by use cases as diverse as drone vision, augmented reality, retail,
image search & retrieval, robotic navigation, targeted advertisements etc.",9
"abstract
saturated hydraulic conductivity ksat is a fundamental characteristic in modeling flow and
contaminant transport in soils and sediments. therefore, many models have been
developed to estimate ksat from easily measurable parameters, such as textural
properties, bulk density, etc. however, ksat is not only affected by textural and structural
characteristics, but also by sample dimensions, e.g., internal diameter and height. using
the unsoda database and the contrast pattern aided regression (cpxr) method, we
recently developed sample dimension-dependent pedotransfer functions to estimate ksat
from textural data, bulk density, and sample dimensions. the main objectives of this
study were evaluating the proposed pedotransfer functions using a larger database, and
comparing them with seven other models. for this purpose, we selected more than",5
"abstract
vertex separators, that is, vertex sets whose deletion disconnects two distinguished vertices
in a graph, play a pivotal role in algorithmic graph theory.
for many realistic models of the real
world it is necessary to consider graphs whose edge set changes with time. more specifically, the
edges are labeled with time stamps. in the literature, these graphs are referred to as temporal
graphs, temporal networks, time-varying networks, edge-scheduled networks, etc. while there
is an extensive literature on separators in “static” graphs, much less is known for the temporal
setting. building on previous work, we study the problem of finding a small vertex set (the
separator) in a temporal graph with two designated terminal vertices such that the removal of
the set breaks all temporal paths connecting one terminal to the other. herein, we consider
two models of temporal paths: paths that contain arbitrarily many hops per time step (non-strict) and paths that contain at most one hop per time step (strict). we settle the hardness
dichotomy (np-hardness versus polynomial-time solvability) of both problem variants regarding
the number of time steps of a temporal graph. moreover, we prove both problem variants to
be np-complete even on temporal graphs whose underlying graph is planar. we show that on
temporal graphs whose underlying graph is planar, if additionally the number of time steps is
constant then the problem variant for strict paths is solvable in quasi-linear time. finally, for
general temporal graphs we introduce the notion of a temporal core (vertices whose incident
edges change over time). we prove that on temporal graphs with constant-sized temporal core,
the non-strict variant is solvable in polynomial time, where the degree of the polynomial is
independent of the size of the temporal core, while the strict variant remains np-complete.",8
"abstract
we say that a countable group g is mcduff if it admits a free ergodic probability measure
preserving action such that the crossed product is a mcduff ii1 factor.
similarly, g is said
to be stable if it admits such an action with the orbit equivalence relation being stable. the
mcduff property, stability, inner amenability and property gamma are subtly related and
several implications and non-implications were obtained in [e73, js85, v09, k12a, k12b]. we
complete the picture with the remaining implications and counterexamples.",4
"abstract
robust attitude tracking control of a small-scale aerobatic helicopter using geometric and backstepping techniques is presented in this article. a nonlinear coupled rotor-fuselage dynamics model of the helicopter is
considered, wherein the rotor flap dynamics is modeled as a first order system, while the fuselage is modeled as a
rigid body dynamically coupled to the rotor system. the robustness of the controller in the presence of both
structured and unstructured disturbances is explored. the structured disturbance is due to uncertainty in
the rotor parameters, and the unstructured perturbation is modeled as an exogenous torque acting on the
fuselage. the performance of the controller is demonstrated in the presence of both types of disturbances
through simulations for a small-scale unmanned helicopter. this work is, possibly, the first systematic attempt
at designing a globally defined robust attitude tracking controller for an aerobatic helicopter which retains
the rotor dynamics and incorporates the uncertainties involved.",3
"abstract
a seminal result of bulow and klemperer [1989] demonstrates the power of competition for
extracting revenue: when selling a single item to n bidders whose values are drawn i.i.d. from
a regular distribution, the simple welfare-maximizing vcg mechanism (in this case, a second-price auction) with one additional bidder extracts at least as much revenue in expectation as
the optimal mechanism. the beauty of this theorem stems from the fact that vcg is a prior-independent mechanism, where the seller possesses no information about the distribution, and
yet, by recruiting one additional bidder it performs better than any prior-dependent mechanism
tailored exactly to the distribution at hand (without the additional bidder).
the beauty of this theorem stems from the fact that vcg is a priorindependent mechanism, where the seller possesses no information about the distribution, and +yet, by recruiting one additional bidder it performs better than any prior-dependent mechanism +tailored exactly to the distribution at hand (without the additional bidder). +in this work, we establish the first full bulow-klemperer results in multi-dimensional environments, proving that by recruiting additional bidders, the revenue of the vcg mechanism +surpasses that of the optimal (possibly randomized, bayesian incentive compatible) mechanism. +for a given environment with i.i.d. bidders, we term the number of additional bidders needed +to achieve this guarantee the environment’s competition complexity. +using the recent duality-based framework of cai et al. [2016] for reasoning about optimal +revenue, we show that the competition complexity of n bidders with additive valuations over m +independent, regular items is at most n + 2m − 2 and at least log(m). we extend our results +to bidders with additive valuations subject to downward-closed constraints, showing that these +significantly more general valuations increase the competition complexity by at most an additive +m − 1 factor. we further improve this bound for the special case of matroid constraints, and +provide additional extensions as well.",8 +"abstract +there are many studies dealing with the protection or restoration of wetlands +and the sustainable economic growth of cities as separate subjects. this study +investigates the conflict between the two in an area where city growth is threatening a protected wetland area. we develop a stochastic cellular automaton +model for urban growth and apply it to the vecht area surrounding the city +of hilversum in the netherlands, using topographic maps covering the past 150 +years. 
we investigate the dependence of the urban growth pattern on the values +associated with the protected wetland and other types of landscape surrounding +the city. the conflict between city growth and wetland protection is projected +to occur before 2035, assuming full protection of the wetland. our results also +show that a milder protection policy, allowing some of the wetland to be sacrificed, could be beneficial for maintaining other valuable landscapes. this insight +would be difficult to achieve by other analytical means. we conclude that even +slight changes in usage priorities of landscapes can significantly affect the landscape distribution in near future. our results also point to the importance of a +protection policy to take the value of surrounding landscapes and the dynamic +nature of urban areas into account. +keywords: urban growth, stochastic modeling, cellular automata, wetland, +landscape protection +2010 msc: 91d10, 68u20, 68q80, 91b72, 91b70",5 +"abstract machine for this semantics is +developed in [20]. in [17] bioambients is extended with an operator modelling chain-like biomolecular +structures and applied within a dna transcription example. in [21] a technique for pathway analysis +is defined in terms of static control flow analysis. the authors then apply their technique to model and +investigate an endocytic pathway that facilitates the process of receptor mediated endocytosis. +in this paper we extend the bioambients calculus with a static type system that classifies each ambient with a group type g specifying the kind of compartments in which the ambient can stay [10]. in other +words, a group type g describes the properties of all the ambients and processes of that group. group +types are defined as pairs (s, c), where s and c are sets of group types. intuitively, given g =(s, c), +s denotes the set of ambient groups where ambients of type g can stay, while c is the set of ambient +groups that can be crossed by ambients of type g. 
on the one hand, the set s can be used to list all the
elements that are allowed within a compartment (complementarily, all the elements which are not allowed,
i.e. repelled). on the other hand, the set c lists all the elements that can cross an ambient, thus modelling
permeability properties of a compartment.
starting from group types as bases, we define a type system ensuring that, in a well-typed process,
ambients cannot be nested in a way that violates the group hierarchy. then, we extend the operational
∗ this research is funded by the biobits project (converging technologies 2007, area: biotechnology-ict), regione
piemonte.",5
"abstract
in the maximum common induced subgraph problem (henceforth mcis), given two
graphs g1 and g2 , one looks for a graph with the maximum number of vertices being both
an induced subgraph of g1 and g2 . mcis is among the most studied classical np-hard
problems. it remains np-hard on many graph classes including forests. in this paper, we
study the parameterized complexity of mcis. as a generalization of clique, it is w[1]-hard
parameterized by the size of the solution. being np-hard even on forests, most structural
parameterizations are intractable. one has to go as far as parameterizing by the size of
the minimum vertex cover to get some tractability. indeed, when parameterized by k :=
vc(g1) + vc(g2), the sum of the vertex cover numbers of the two input graphs, the problem
was shown to be fixed-parameter tractable, with an algorithm running in time 2^o(k log k). we
complement this result by showing that, unless the eth fails, it cannot be solved in time
2^o(k log k). this kind of tight lower bound has been shown for a few problems and parameters
but, to the best of our knowledge, not for the vertex cover number. we also show that
mcis does not have a polynomial kernel when parameterized by k, unless np ⊆ conp/poly.
finally, we study mcis and its connected variant mccis on some special graph classes and
with respect to other structural parameters.",8
"abstract—this paper introduces the bloscpack file format and the accompanying python reference implementation. bloscpack is a lightweight, compressed
binary file-format based on the blosc codec and is designed for lightweight,
fast serialization of numerical data. this article presents the features of the
file-format and some api aspects of the reference implementation, in
particular the ability to handle numpy ndarrays. furthermore, in order to demonstrate its utility, the format is compared both feature- and performance-wise to
a few alternative lightweight serialization solutions for numpy ndarrays. the
performance comparisons take the form of some comprehensive benchmarks
over a range of different artificial datasets with varying size and complexity, the
results of which are presented as the last section of this article.
index terms—applied information theory,
python, numpy, file format, serialization, blosc",6
"abstract. let q be an anisotropic quadratic form defined over a general field f. in this
article, we formulate a new upper bound for the isotropy index of q after scalar extension
to the function field of an arbitrary quadric. on the one hand, this bound offers a
refinement of a celebrated bound established in earlier work of karpenko-merkurjev and
totaro; on the other, it is a direct generalization of karpenko’s theorem on the possible
values of the first higher isotropy index.
the two cases are treated separately using completely different +approaches, the first being algebraic-geometric, and the second being purely algebraic.",0 +"abstract—because energy storage systems have better ramping +characteristics than traditional generators, their participation in +frequency regulation should facilitate the balancing of load and +generation. however, they cannot sustain their output indefinitely. +system operators have therefore implemented new frequency +regulation policies to take advantage of the fast ramps that +energy storage systems can deliver while alleviating the problems associated with their limited energy capacity. this paper +contrasts several u.s. policies that directly affect the participation +of energy storage systems in frequency regulation and compares +the revenues that the owners of such systems might achieve under +each policy. +index terms—energy storage, power system economics, power +system frequency control",3 +"abstract +the lack of a comprehensive decision-making approach at the community-level is of significant importance +that must be garnered immediate attention. network-level decision-making algorithms need to solve largescale optimization problems that pose computational challenges. the complexity of the optimization problems increases when various sources of uncertainty are considered. this research introduces a sequential +discrete optimization approach, as a decision-making framework at the community-level for recovery management. the proposed mathematical approach leverages approximate dynamic programming along with +heuristics for the determination of recovery actions. our methodology overcomes the curse of dimensionality +and manages multi-state, large-scale infrastructure systems following disasters. 
we also provide computational results which suggest that our methodology not only incorporates recovery policies of responsible
public and private entities within the community, but also substantially enhances the performance of their
underlying strategies with limited resources. the methodology can be implemented efficiently to identify
near-optimal recovery decisions following a severe earthquake based on multiple objectives for a modeled
electrical power network of a testbed community coarsely modeled after gilroy, california, united states.
the proposed optimization method supports risk-informed community decision-makers within chaotic post-hazard circumstances.
keywords: community resilience, electrical power network, combinatorial optimization, rollout
algorithm, approximate dynamic programming",5
"abstract
in this work we address the challenging problem of unsupervised learning from videos.
existing methods utilize the spatio-temporal continuity in contiguous video frames as
regularization for the learning process. typically, this temporal coherence of close
frames is used as a free form of annotation, encouraging the learned representations
to exhibit small differences between these frames. but this type of approach fails to
capture the dissimilarity between videos with different content, hence learning less
discriminative features. we here propose two siamese architectures for convolutional
neural networks, and their corresponding novel loss functions, to learn from unlabeled videos, which jointly exploit the local temporal coherence between contiguous
frames, and a global discriminative margin used to separate representations of different videos. an extensive experimental evaluation is presented, where we validate the
proposed models on various tasks. first, we show how the learned features can be used
to discover actions and scenes in video collections.
second, we show the benefits of
such unsupervised learning from just unlabeled videos, which can be directly used
as a prior for the supervised recognition tasks of actions and objects in images, where
our results further show that our features can even surpass a traditional and heavily
supervised pre-training plus fine-tuning strategy.
keywords: unsupervised learning, deep learning, object recognition, object discovery",1
"abstract—integer forcing is an alternative approach to conventional linear receivers for multiple-antenna systems. in an
integer-forcing receiver, integer linear combinations of messages
are extracted from the received matrix before each individual
message is recovered. recently, the integer-forcing approach
was generalized to a block fading scenario. among the existing
variations of the scheme, the ones with the highest achievable
rates have the drawback that no efficient algorithm is known
to find the best choice of integer linear combination coefficients.
in this paper, we propose several sub-optimal methods to find
these coefficients with low complexity, covering both parallel
and successive interference cancellation versions of the receiver.
simulation results show that the proposed methods attain a performance close to optimal in terms of achievable rates for a given
outage probability. moreover, a low-complexity implementation
using root ldpc codes is developed, showing that the benefits
of the proposed methods also carry on to practice.
index terms—block fading, integer-forcing linear receivers,
lattices, root ldpc codes, successive interference cancellation.",7
"abstract—background and objective: a system level view of
cellular processes for human and several organisms can be captured by analyzing molecular interaction networks. a molecular
interaction network formed of differentially expressed genes and
their interactions helps to understand key players behind disease
development.
so, if the functions of these genes are blocked +by altering their interactions, it would have a great impact in +controlling the disease. due to this promising consequence, the +problem of inferring disease causing genes and their pathways has +attained a crucial position in computational biology research. +however, considering the huge size of interaction networks, +executing computations can be costly. review of literatures shows +that the methods proposed for finding the set of disease causing +genes could be assessed in terms of their accuracy which a perfect +algorithm would find. along with accuracy, the time complexity +of the method is also important, as high time complexities would +limit the number of pathways that could be found within a +pragmatic time interval. +methods and results: here, the problem has been tackled by +integrating graph theoretical approaches with an approximation +algorithm. the problem of inferring disease causing genes and +their pathways has been transformed to a graph theoretical +problem. graph pruning techniques have been applied to get +the results in practical time. then, randomized rounding, an +efficient approach to design an approximation algorithm, has +been applied to fetch the most relevant causal genes and pathways. experimentation on multiple benchmark datasets has been +demonstrated more accurate and computationally time efficient +results than existing algorithms. also, biological relevance of these +results has been analyzed. +conclusions: based on computational approaches on biological +data, the sets of disease causing genes and corresponding pathways are identified for multiple disease cases. the proposed +approach would have a remarkable contribution in areas like +drug development and gene therapy, if we could recognize these +results biologically too. 
+index terms—molecular interaction network; causal genes; +dysregulated pathway; graph pruning; approximation algorithm; randomized rounding",8 +"abstract +in this paper we develop operational calculus on programming spaces that generalizes existing approaches to automatic differentiation of computer programs and provides a rigorous +framework for program analysis through calculus. +we present an abstract computing machine that models automatically differentiable +computer programs. computer programs are viewed as maps on a finite dimensional vector +space called virtual memory space, which we extend by the tensor algebra of its dual to +accommodate derivatives. the extended virtual memory is by itself an algebra of programs, +a data structure one can calculate with, and its elements give the expansion of the original +program as an infinite tensor series at program’s input values. we define the operator of +differentiation on programming spaces and implement a generalized shift operator in terms +of its powers. our approach offers a powerful tool for program analysis and approximation, +and provides deep learning with a formal calculus. +such a calculus connects general programs with deep learning, through operators that +map both formulations to the same space. this equivalence enables a generalization of +the existing methods for neural analysis to any computer program, and vice versa. several +applications are presented, most notably a meaningful way of neural network initialization +that leads to a process of program boosting. 
+keywords: programming spaces, operational calculus, deep learning, neural networks, +differentiable programs, tensor calculus, program analysis",9 +"abstract +a mobile wireless delay-tolerant network (dtn) model is proposed and analyzed, in which infinitely many nodes +are initially placed on r2 according to a uniform poisson point process (ppp) and subsequently travel, independently +of each other, along trajectories comprised of line segments, changing travel direction at time instances that form +a poisson process, each time selecting a new travel direction from an arbitrary distribution; all nodes maintain +constant speed. a single information packet is traveling towards a given direction using both wireless transmissions +and sojourns on node buffers, according to a member of a broad class of possible routing rules. for this model, +we compute the long-term averages of the speed with which the packet travels towards its destination and the rate +with which the wireless transmission cost accumulates. because of the complexity of the problem, we employ two +intuitive, simplifying approximations; simulations verify that the approximation error is typically small. our results +quantify the fundamental trade-off that exists in mobile wireless dtns between the packet speed and the packet +delivery cost. the framework developed here is both general and versatile, and can be used as a starting point for +further investigation1. +index terms +delay-tolerant network (dtn), geographic routing, information propagation speed, mobile wireless network.",7 +"abstract +this is a survey on the programming languages: c++, javascript, aspectj, c#, haskell, java, +php, scala, scheme, and bpel. 
our survey work involves a comparative study of these ten +programming languages with respect to the following criteria: secure programming practices, web +application development, web service composition, oop-based abstractions, reflection, aspect +orientation, functional programming, declarative programming, batch scripting, and ui +prototyping. we study these languages in the context of the above mentioned criteria and the level of +support they provide for each one of them. +keywords: programming languages, programming paradigms, language features, language design +and implementation",6 +"abstract. for any non-zero finite module m of finite projective dimension +over a noetherian local ring r with maximal ideal m and residue field k, it +is proved that the natural map extr (k, m ) → extr (k, m/mm ) is non-zero +when r is regular and is zero otherwise. a noteworthy aspect of the proof +is the use of stable cohomology. applications include computations of bass +series over certain local rings.",0 +"abstract +fuzzy controllers are efficient and interpretable system controllers for continuous state and action spaces. to date, such +controllers have been constructed manually or trained automatically either using expert-generated problem-specific cost +functions or incorporating detailed knowledge about the optimal control strategy. both requirements for automatic +training processes are not found in most real-world reinforcement learning (rl) problems. in such applications, online +learning is often prohibited for safety reasons because it requires exploration of the problem’s dynamics during policy +training. we introduce a fuzzy particle swarm reinforcement learning (fpsrl) approach that can construct fuzzy rl +policies solely by training parameters on world models that simulate real system dynamics. these world models are +created by employing an autonomous machine learning technique that uses previously generated transition samples of +a real system. 
to the best of our knowledge, this approach is the first to relate self-organizing fuzzy controllers to +model-based batch rl. fpsrl is intended to solve problems in domains where online learning is prohibited, system +dynamics are relatively easy to model from previously generated default policy transition samples, and it is expected that +a relatively easily interpretable control policy exists. the efficiency of the proposed approach with problems from such +domains is demonstrated using three standard rl benchmarks, i.e., mountain car, cart-pole balancing, and cart-pole +swing-up. our experimental results demonstrate high-performing, interpretable fuzzy policies. +keywords: interpretable, reinforcement learning, fuzzy policy, fuzzy controller, particle swarm optimization +1. introduction +this work is motivated by typical industrial application scenarios. complex industrial plants, like wind or gas +turbines, have already been operated in the field for years. +for these plants, low-level control is realized by dedicated +expert-designed controllers, which guarantee safety and +stability. such low-level controllers are constructed with +respect to the plant’s subsystem dependencies which can +be modeled by expert knowledge and complex mathematical abstractions, such as first principle models and finite element methods. examples for low-level controllers include +self-organizing fuzzy controllers, which are considered to +be efficient and interpretable (casillas, cordon, herrera, +∗ corresponding",3 +"abstract +this work investigates the consensus problem for multi-agent nonlinear systems +through the distributed real-time nonlinear receding horizon control methodology. with this work, we develop a scheme to reach the consensus for nonlinear +multi agent systems under fixed directed/undirected graph(s) without the need +of any linearization techniques. 
for this purpose, the problem of consensus +is converted into an optimization problem and is directly solved by the backwards sweep riccati method to generate the control protocol which results in a +non-iterative algorithm. stability analysis is conducted to provide convergence +guarantees of proposed scheme. in addition, an extension to the leader-following +consensus of nonlinear multi-agent systems is presented. several examples are +provided to validate and demonstrate the effectiveness of the presented scheme +and the corresponding theoretical results. +keywords: multi-agent consensus problems, leader-following consensus +problems, nonlinear receding horizon control, real-time optimization",3 +"abstract. we consider a classical k-center problem in trees. let t be a tree of n vertices and every +vertex has a nonnegative weight. the problem is to find k centers on the edges of t such that the +maximum weighted distance from all vertices to their closest centers is minimized. megiddo and tamir +(siam j. comput., 1983) gave an algorithm that can solve the problem in o(n log2 n) time by using +cole’s parametric search. since then it has been open for over three decades whether the problem can +be solved in o(n log n) time. in this paper, we present an o(n log n) time algorithm for the problem +and thus settle the open problem affirmatively.",8 +abstract,2 +"abstract— in this paper, we investigate the properties of an +improved swing equation model for synchronous generators. +this model is derived by omitting the main simplifying assumption of the conventional swing equation, and requires a novel +analysis for the stability and frequency regulation. we consider +two scenarios. first we study the case that a synchronous +generator is connected to a constant load. second, we inspect +the case of the single machine connected to an infinite bus. +simulations verify the results.",3 +"abstract. 
fuzzing consists of repeatedly testing an application with +modified, or fuzzed, inputs with the goal of finding security vulnerabilities in input-parsing code. in this paper, we show how to automate the +generation of an input grammar suitable for input fuzzing using sample inputs and neural-network-based statistical machine-learning techniques. we present a detailed case study with a complex input format, +namely pdf, and a large complex security-critical parser for this format, +namely, the pdf parser embedded in microsoft’s new edge browser. +we discuss (and measure) the tension between conflicting learning and +fuzzing goals: learning wants to capture the structure of well-formed inputs, while fuzzing wants to break that structure in order to cover unexpected code paths and find bugs. we also present a new algorithm for this +learn&fuzz challenge which uses a learnt input probability distribution +to intelligently guide where to fuzz inputs.",6 +"abstract. we consider the makespan minimization coupled-tasks problem in presence of compatibility constraints with a specified topology. +in particular, we focus on stretched coupled-tasks, i.e. coupled-tasks +having the same sub-tasks execution time and idle time duration. we +study several problems in framework of classic complexity and approximation for which the compatibility graph is bipartite (star, chain, . . .). +in such a context, we design some efficient polynomial-time approximation algorithms for an intractable scheduling problem according to +some parameters.",8 +"abstract +to each of the symmetry groups of the platonic solids we adjoin a carefully designed +involution yielding topological generators of pu (2) which have optimal covering properties +as well as efficient navigation. these are a consequence of optimal strong approximation +for integral quadratic forms associated with certain special quaternion algebras and their +arithmetic groups. 
the generators give super efficient 1-qubit quantum gates and are +natural building blocks for the design of universal quantum gates.",4 +"abstract +deep neural network (dnn) models have recently obtained state-of-the-art prediction accuracy for +the transcription factor binding (tfbs) site classification task. however, it remains unclear how +these approaches identify meaningful dna sequence signals and give insights as to why tfs bind +to certain locations. in this paper, we propose a toolkit called the deep motif dashboard (demo +dashboard) which provides a suite of visualization strategies to extract motifs, or sequence patterns +from deep neural network models for tfbs classification. we demonstrate how to visualize and +understand three important dnn models: convolutional, recurrent, and convolutional-recurrent +networks. our first visualization method is finding a test sequence’s saliency map which uses +first-order derivatives to describe the importance of each nucleotide in making the final prediction. +second, considering recurrent models make predictions in a temporal manner (from one end of a +tfbs sequence to the other), we introduce temporal output scores, indicating the prediction score +of a model over time for a sequential input. lastly, a class-specific visualization strategy finds the +optimal input sequence for a given tfbs positive class via stochastic gradient optimization. our +experimental results indicate that a convolutional-recurrent architecture performs the best among +the three architectures. the visualization techniques indicate that cnn-rnn makes predictions by +modeling both motifs as well as dependencies among them.",9 +"abstract +for lengths 64 and 66, we construct extremal singly even self-dual +codes with weight enumerators for which no extremal singly even selfdual codes were previously known to exist. 
we also construct new +40 inequivalent extremal doubly even self-dual [64, 32, 12] codes with +covering radius 12 meeting the delsarte bound.",7 +abstract,6 +"abstract +we describe a question answering model that +applies to both images and structured knowledge bases. the model uses natural language strings to automatically assemble neural networks from a collection of composable +modules. parameters for these modules are +learned jointly with network-assembly parameters via reinforcement learning, with only +(world, question, answer) triples as supervision. our approach, which we term a dynamic +neural module network, achieves state-of-theart results on benchmark datasets in both visual and structured domains.",9 +"abstract. grammars written as constraint handling rules (chr) can +be executed as efficient and robust bottom-up parsers that provide a +straightforward, non-backtracking treatment of ambiguity. abduction +with integrity constraints as well as other dynamic hypothesis generation techniques fit naturally into such grammars and are exemplified +for anaphora resolution, coordination and text interpretation.",6 +"abstract +referential integrity (ri) is an important correctness property of a +shared, distributed object storage system. it is sometimes thought +that enforcing ri requires a strong form of consistency. in this paper, we argue that causal consistency suffices to maintain ri. we +support this argument with pseudocode for a reference crdt data +type that maintains ri under causal consistency. quickcheck has +not found any errors in the model.",8 +"abstract +the presence of behind-the-meter rooftop pv and storage in the residential sector is poised +to increase significantly. here we quantify in detail the value of these technologies to consumers +and service providers. we characterize the heterogeneity in household electricity cost savings +under time-varying prices due to consumption behavior differences. 
the top 15% of consumers +benefit two to three times as much as the remaining 85%. different pricing policies do not +significantly alter how households fare with respect to one another. we define the value of +information as the financial value of improved forecasting capabilities for a household. the +typical value of information is 3.5 cents per hour per kwh reduction of standard deviation of +forecast error. coordination services that combine the resources available at all households can +reduce costs by an additional 15% to 30% of the original total cost. surprisingly, on the basis of +coordinated action alone, service providers will not encourage adoption beyond 50% within a +group. coordinated information, however, enables the providers to generate additional value +with increasing adoption.",3 +"abstract +we consider inference in the scalar diffusion model dxt = b(xt ) dt + σ(xt ) dwt with +discrete data (xj∆n )0≤j≤n , n → ∞, ∆n → 0 and periodic coefficients. for σ given, we +prove a general theorem detailing conditions under which bayesian posteriors will contract +in l2 –distance around the true drift function b0 at the frequentist minimax rate (up to +logarithmic factors) over besov smoothness classes. we exhibit natural nonparametric priors +which satisfy our conditions. our results show that the bayesian method adapts both to an +unknown sampling regime and to unknown smoothness.",10 +"abstract +expectation maximization (em) has recently been shown to be an efficient algorithm +for learning finite-state controllers (fscs) in large decentralized pomdps (dec-pomdps). +however, current methods use fixed-size fscs and often converge to maxima that are far +from optimal. this paper considers a variable-size fsc to represent the local policy of each +agent. these variable-size fscs are constructed using a stick-breaking prior, leading to a new +framework called decentralized stick-breaking policy representation (dec-sbpr). 
this approach +learns the controller parameters with a variational bayesian algorithm without having to assume +that the dec-pomdp model is available. the performance of dec-sbpr is demonstrated +on several benchmark problems, showing that the algorithm scales to large problems while +outperforming other state-of-the-art methods.",3 +"abstract +deep learning methods have shown great promise in many practical applications, +ranging from speech recognition, visual object recognition, to text processing. +however, most of the current deep learning methods suffer from scalability problems for large-scale applications, forcing researchers or users to focus on smallscale problems with fewer parameters. +in this paper, we consider a well-known machine learning model, deep belief networks (dbns) that have yielded impressive classification performance on a large +number of benchmark machine learning tasks. to scale up dbn, we propose an +approach that can use the computing clusters in a distributed environment to train +large models, while the dense matrix computations within a single machine are +sped up using graphics processors (gpu). when training a dbn, each machine +randomly drops out a portion of neurons in each hidden layer, for each training +case, making the remaining neurons only learn to detect features that are generally +helpful for producing the correct answer. within our approach, we have developed +four methods to combine outcomes from each machine to form a unified model. +our preliminary experiment on the mnist handwritten digit database demonstrates that our approach outperforms the state of the art test error rate.",9 +"abstract +a shortcoming of existing reachability approaches for nonlinear systems is the poor scalability +with the number of continuous state variables. 
to mitigate this problem we present a simulationbased approach where we first sample a number of trajectories of the system and next establish +bounds on the convergence or divergence between the samples and neighboring trajectories. we +compute these bounds using contraction theory and reduce the conservatism by partitioning the +state vector into several components and analyzing contraction properties separately in each +direction. among other benefits this allows us to analyze the effect of constant but uncertain +parameters by treating them as state variables and partitioning them into a separate direction. +we next present a numerical procedure to search for weighted norms that yield a prescribed +contraction rate, which can be incorporated in the reachability algorithm to adjust the weights +to minimize the growth of the reachable set.",3 +"abstract: one of the most recent architectures of networks is software-defined networks (sdns) using a controller appliance to control the set of switches on the network. the controlling process includes installing or +uninstalling packet-processing rules on flow tables of switches. +this paper presents a high-level imperative network programming language, called imnet, to facilitate writing +efficient, yet simple, programs executed by controller to manage switches. imnet is simply-structured, expressive, +compositional, and imperative. this paper also introduces an operational semantics to imnet. detailed examples +of programs (with their operational semantics) constructed in imnet are illustrated in the paper as well. +key–words: network programming languages, controller-switch architecture, operational semantics, syntax, imnet.",6 +"abstract—advances in de novo synthesis of dna and computational gene design methods make possible the customization of +genes by direct manipulation of features such as codon bias and mrna secondary structure. 
codon context is another feature +significantly affecting mrna translational efficiency, but existing methods and tools for evaluating and designing novel optimized +protein coding sequences utilize untested heuristics and do not provide quantifiable guarantees on design quality. in this study +we examine statistical properties of codon context measures in an effort to better understand the phenomenon. we analyze the +computational complexity of codon context optimization and design exact and efficient heuristic gene recoding algorithms under +reasonable constraint models. we also present a web-based tool for evaluating codon context bias in the appropriate context. +index terms— computational biology, dynamic programming, simulated annealing, synthetic biology",5 +"abstract +initial population plays an important role in heuristic algorithms such as ga as it help to decrease +the time those algorithms need to achieve an acceptable result. furthermore, it may influence the +quality of the final answer given by evolutionary algorithms. in this paper, we shall introduce a +heuristic method to generate a target based initial population which possess two mentioned +characteristics. the efficiency of the proposed method has been shown by presenting the results of +our tests on the benchmarks.",9 +"abstract. given a positive integer κ, we investigate the class of numerical semigroups verifying the property that every two subsequent non gaps, smaller than the +conductor, are spaced by at least κ. these semigroups will be called κ-sparse and +generalize the concept of sparse numerical semigroups.",0 +"abstract +solvency games, introduced by berger et al., provide an abstract framework for modelling +decisions of a risk-averse investor, whose goal is to avoid ever going broke. 
we study a new +variant of this model, where, in addition to stochastic environment and fixed increments and +decrements to the investor’s wealth, we introduce interest, which is earned or paid on the current +level of savings or debt, respectively. +we study problems related to the minimum initial wealth sufficient to avoid bankruptcy +(i.e. steady decrease of the wealth) with probability at least p. we present an exponential time +algorithm which approximates this minimum initial wealth, and show that a polynomial time +approximation is not possible unless p = np. for the qualitative case, i.e. p = 1, we show +that the problem whether a given number is larger than or equal to the minimum initial wealth +belongs to np ∩ conp, and show that a polynomial time algorithm would yield a polynomial +time algorithm for mean-payoff games, existence of which is a longstanding open problem. we +also identify some classes of solvency mdps for which this problem is in p. in all above cases the +algorithms also give corresponding bankruptcy avoiding strategies. +1998 acm subject classification g.3 probability and statistics. +keywords and phrases markov decision processes, algorithms, complexity, market models.",5 +"abstract +new vision sensors, such as the dynamic and active-pixel vision sensor (davis), incorporate a conventional globalshutter camera and an event-based sensor in the same pixel array. these sensors have great potential for high-speed +robotics and computer vision because they allow us to combine the benefits of conventional cameras with those of +event-based sensors: low latency, high temporal resolution, and very high dynamic range. however, new algorithms +are required to exploit the sensor characteristics and cope with its unconventional output, which consists of a stream of +asynchronous brightness changes (called “events”) and synchronous grayscale frames. 
for this purpose, we present +and release a collection of datasets captured with a davis in a variety of synthetic and real environments, which we +hope will motivate research on new algorithms for high-speed and high-dynamic-range robotics and computer-vision +applications. in addition to global-shutter intensity images and asynchronous events, we provide inertial measurements +and ground-truth camera poses from a motion-capture system. the latter allows comparing the pose accuracy of egomotion estimation algorithms quantitatively. all the data are released both as standard text files and binary files (i.e., +rosbag). this paper provides an overview of the available data and describes a simulator that we release open-source +to create synthetic event-camera data. +keywords +event-based cameras, visual odometry, slam, simulation",1 +"abstract +health related social media mining is a valuable apparatus for the early recognition of the diverse antagonistic +medicinal conditions. mostly, the existing methods are based on machine learning with knowledge-based learning. +this working note presents the recurrent neural network (rnn) and long short-term memory (lstm) based embedding for automatic health text classification in the social media mining. for each task, two systems are built and that +classify the tweet at the tweet level. rnn and lstm are used for extracting features and non-linear activation function at the last layer facilitates to distinguish the tweets of different categories. the experiments are conducted on 2nd +social media mining for health applications shared task at amia 2017. the experiment results are considerable; +however the proposed method is appropriate for the health text classification. this is primarily due to the reason that, +it doesn’t rely on any feature engineering mechanisms. 
+introduction +with the expansion of micro blogging platforms such as twitter, the internet is progressively being utilized to spread +health information instead of similarly as a wellspring of data1, 2 . twitter allows users to share their status messages +typically called as tweets, restricted to 140 characters. most of the time, these tweets expresses the opinions about the +topics. thus analysis of tweets has been considered as a significant task in many of the applications, here for health +related applications. +health text classification is taken into account a special case of text classification. the existing methods have used +machine learning methods with feature engineering. most commonly used features are n-grams, parts-of-speech tags, +term frequency-inverse document frequency, semantic features such as mentions of chemical substance and disease, +wordnet synsets, adverse drug reaction lexicon, etc3–6, 16 . in6, 7 proposed ensemble based approach for classifying the +adverse drug reactions tweets. +recently, the deep learning methods have performed well8 and used in many tasks mainly due to that it doesn’t rely on +any feature engineering mechanism. however, the performance of deep learning methods implicitly relies on the large +amount of raw data sets. to make use of unlabeled data,9 proposed semi-supervised approach based on convolutional +neural network for adverse drug event detection. though the data sets of task 1 and task 2 are limited, this paper +proposes rnn and lstm based embedding method. +background and hyper parameter selection +this section discusses the concepts of tweet representation and deep learning algorithms particularly recurrent neural +network (rnn) and long short-term memory (lstm) in a mathematical way. +tweet representation +representation of tweets typically called as tweet encoding. this contains two steps. the tweets are tokenized to +words during the first step. moreover, all words are transformed to lower-case. 
in second step, a dictionary is formed +by assigning a unique key for each word in a tweet. the unknown words in a tweet are assigned to default key 0. +to retain the word order in a tweet, each word is replaced by a unique number according to a dictionary. each tweet +vector sequence is made to same length by choosing the particular length. the tweet sequences that are too long than +the particular length are discarded and too short are padded by zeros. this type of word vector representation is passed +as input to the word embedding layer. for task 1, the maximum tweet sequence length is 35. thus the train matrix",2 +"abstract +in this paper, we address the dataset scarcity issue with the +hyperspectral image classification. as only a few thousands +of pixels are available for training, it is difficult to effectively +learn high-capacity convolutional neural networks (cnns). +to cope with this problem, we propose a novel cross-domain +cnn containing the shared parameters which can co-learn +across multiple hyperspectral datasets. the network also contains the non-shared portions designed to handle the datasetspecific spectral characteristics and the associated classification tasks. our approach is the first attempt to learn a cnn +for multiple hyperspectral datasets, in an end-to-end fashion. +moreover, we have experimentally shown that the proposed +network trained on three of the widely used datasets outperform all the baseline networks which are trained on single +dataset. +index terms— hyperspectral image classification, convolutional neural network (cnn), shared network, cross domain, domain adaptation +1. introduction +the introduction of convolutional neural network (cnn) has +brought forth unprecedented performance increase for classification problems in many different domains including rgb, +rgbd, and hyperspectral images. 
[1, 2, 3, 4] such performance increase was made possible due to the ability of cnn +being able to learn and express the deep and wide connection +between the input and the output using a huge number of parameters. in order to learn such a huge set of parameters, having a large scale dataset has become a significant requirement. +when the size of the given dataset is insufficient to learn a +network, one may consider using a larger external dataset to +better learn the large set of parameters. for instance, girshick +et al. [2] introduced a domain adaptation approach where the +network is trained on a large scale source domain (imagenet +dataset) and then finetuned on a target domain (object detection dataset). +when applying cnn to hyperspectral image classification problem, we also face the similar issue as there are no",1 +"abstract syntax tree (ast), consisting of syntax nodes (corresponding to nonterminals in the programming language’s grammar) and syntax tokens (corresponding to terminals). +we label syntax nodes with the name of the nonterminal from the program’s grammar, whereas +syntax tokens are labeled with the string that they represent. we use child edges to connect nodes +according to the ast. as this does not induce an order on children of a syntax node, we additionally +add nexttoken edges connecting each syntax token to its successor. an example of this is shown in +fig. 2a. +to capture the flow of control and data through a program, we add additional edges connecting +different uses and updates of syntax tokens corresponding to variables. for such a token v, let dr (v) +be the set of syntax tokens at which the variable could have been used last. this set may contain +several nodes (for example, when using a variable after a conditional in which it was used in both +branches), and even syntax tokens that follow in the program code (in the case of loops). similarly, +let dw (v) be the set of syntax tokens at which the variable was last written to. 
using these, we +add lastread (resp. lastwrite) edges connecting v to all elements of dr (v) (resp. dw (v)). +additionally, whenever we observe an assignment v = expr , we connect v to all variable tokens +occurring in expr using computedfrom edges. an example of such semantic edges is shown in +fig. 2b. +we extend the graph to chain all uses of the same variable using lastlexicaluse edges (independent +of data flow, i.e., in if (...) { ... v ...} else { ... v ...}, we link the two occurrences of v). we also connect return tokens to the method declaration using returnsto edges +(this creates a “shortcut” to its name and type). inspired by rice et al. (2017), we connect arguments +in method calls to the formal parameters that they are matched to with formalargname edges, +i.e., if we observe a call foo(bar) and a method declaration foo(inputstream stream), +we connect the bar token to the stream token. finally, we connect every token corresponding +to a variable to enclosing guard expressions that use the variable with guardedby and guardedbynegation edges. for example, in if (x > y) { ... x ...} else { ... y ...}, +we add a guardedby edge from x (resp. a guardedbynegation edge from y) to the ast node +corresponding to x > y. +finally, for all types of edges we introduce their respective backwards edges (transposing the +adjacency matrix), doubling the number of edges and edge types. backwards edges help with +propagating information faster across the ggnn and make the model more expressive. +leveraging variable type information we assume a statically typed language and that the +source code can be compiled, and thus each variable has a (known) type τ (v). to use it, we define +a learnable embedding function r(τ ) for known types and additionally define an “u nk t ype” for +all unknown/unrepresented types. we also leverage the rich type hierarchy that is available in many +object-oriented languages. 
for this, we map a variable’s type τ (v) to the set of its supertypes, i.e. +τ ∗ (v) = {τ : τ (v) implements type τ } ∪ {τ (v)}. we then compute the type representation r∗ (v) +used for state updates and the number of propagation steps per ggnn layer is fixed to 1. instead, several layers +are used. in our experiments, gcns generalized less well than ggnns.",2 +"abstract—wireless fading channels suffer from both channel +fadings and additive white gaussian noise (awgn). as a result, +it is impossible for fading channels to support a constant rate data +stream without using buffers. in this paper, we consider information transmission over an infinite-buffer-aided block rayleigh +fading channel in the low signal-to-noise ratio (snr) regime. we +characterize the transmission capability of the channel in terms of +stationary queue length distribution, packet delay, as well as data +rate. based on the memoryless property of the service provided by +the channel in each block, we formulate the transmission process +as a discrete time discrete state d/g/1 queueing problem. the +obtained results provide a full characterization of block rayleigh +fading channels and can be extended to the finite-buffer-aided +transmissions. +index terms—block rayleigh fading channel, buffer-aided +communication, queueing analysis, queue length distribution, +packet delay.",7 +"abstract +the millimeter wave spectra at 71-76ghz (70ghz) and 81-86ghz (80ghz) have the potential +to endow fifth-generation new radio (5g-nr) with mobile connectivity at gigabit rates. however, a +pressing issue is the presence of incumbent systems in these bands, which are primarily point-topoint fixed stations (fss). in this paper, we first identify the key properties of incumbents by parsing +databases of existing stations in major cities to devise several modeling guidelines and characterize their +deployment geometry and antenna specifications. 
second, we develop a detailed interference framework +to compute the aggregate interference from outdoor 5g-nr users into fss. we then present several case +studies in dense populated areas, using actual incumbent databases and building layouts. our simulation +results demonstrate promising 5g coexistence at 70ghz and 80ghz as the majority of fss experience +interference well below the noise floor thanks to the propagation losses in these bands and the deployment +geometry of the incumbent and 5g systems. for the few fss that may incur higher interference, we +propose several passive interference mitigation techniques such as angular-based exclusion zones and +spatial power control. simulations results show that the techniques can effectively protect fss, without +tangible degradation of the 5g coverage.",7 +"abstract. quantum machine learning witnesses an increasing amount of quantum +algorithms for data-driven decision making, a problem with potential applications +ranging from automated image recognition to medical diagnosis. many of those +algorithms are implementations of quantum classifiers, or models for the classification +of data inputs with a quantum computer. following the success of collective decision +making with ensembles in classical machine learning, this paper introduces the concept +of quantum ensembles of quantum classifiers. creating the ensemble corresponds +to a state preparation routine, after which the quantum classifiers are evaluated in +parallel and their combined decision is accessed by a single-qubit measurement. this +framework naturally allows for exponentially large ensembles in which – similar to +bayesian learning – the individual classifiers do not have to be trained. 
as an example, +we analyse an exponentially large quantum ensemble in which each classifier is weighed +according to its performance in classifying the training data, leading to new results for +quantum as well as classical machine learning.",10 +"abstract +crowdsourced 3d cad models are becoming easily accessible online, and can potentially generate an infinite +number of training images for almost any object category. +we show that augmenting the training data of contemporary +deep convolutional neural net (dcnn) models with such +synthetic data can be effective, especially when real training data is limited or not well matched to the target domain. +most freely available cad models capture 3d shape but are +often missing other low level cues, such as realistic object +texture, pose, or background. in a detailed analysis, we +use synthetic cad-rendered images to probe the ability of +dcnn to learn without these cues, with surprising findings. +in particular, we show that when the dcnn is fine-tuned +on the target detection task, it exhibits a large degree of invariance to missing low-level cues, but, when pretrained on +generic imagenet classification, it learns better when the +low-level cues are simulated. we show that our synthetic +dcnn training approach significantly outperforms previous methods on the pascal voc2007 dataset when learning in the few-shot scenario and improves performance in a +domain shift scenario on the office benchmark.",9 +"abstract. we show that a countable group is locally virtually cyclic if and only if its +bredon cohomological dimension for the family of virtually cyclic subgroups is at most one.",4 +"abstract +an adaptive system for the suppression of vibration transmission using a single piezoelectric +actuator shunted by a negative capacitance circuit is presented. it is known that using negative +capacitance shunt, the spring constant of piezoelectric actuator can be controlled to extreme values +of zero or infinity. 
since the value of spring constant controls a force transmitted through an elastic +element, it is possible to achieve a reduction of transmissibility of vibrations through a piezoelectric +actuator by reducing its effective spring constant. the narrow frequency range and broad frequency +range vibration isolation systems are analyzed, modeled, and experimentally investigated. the +problem of high sensitivity of the vibration control system to varying operational conditions is +resolved by applying an adaptive control to the circuit parameters of the negative capacitor. a +control law that is based on the estimation of the value of effective spring constant of shunted +piezoelectric actuator is presented. an adaptive system, which achieves a self-adjustment of the +negative capacitor parameters is presented. it is shown that such an arrangement allows a design +of a simple electronic system, which, however, offers a great vibration isolation efficiency in variable +vibration conditions. +keywords: piezoelectric actuator, vibration transmission suppression, piezoelectric shunt damping, negative capacitor, elastic stiffness control, adaptive device",5 +"abstract +graphs provide a powerful means for representing complex +interactions between entities. recently, new deep learning approaches have emerged for representing and modeling graphstructured data while the conventional deep learning methods, +such as convolutional neural networks and recurrent neural +networks, have mainly focused on the grid-structured inputs +of image and audio. leveraged by representation learning capabilities, deep learning-based techniques can detect structural characteristics of graphs, giving promising results for +graph applications. in this paper, we attempt to advance deep +learning for graph-structured data by incorporating another +component: transfer learning. 
by transferring the intrinsic geometric information learned in the source domain, our approach can construct a model for a new but related task in +the target domain without collecting new data and without +training a new model from scratch. we thoroughly tested our +approach with large-scale real-world text data and confirmed +the effectiveness of the proposed transfer learning framework +for deep learning on graphs. according to our experiments, +transfer learning is most effective when the source and target domains bear a high level of structural similarity in their +graph representations.",9 +"abstract +in survival analysis it often happens that some subjects under study do not experience the event of interest; they are considered to be ‘cured’. the population is thus +a mixture of two subpopulations : the one of cured subjects, and the one of ‘susceptible’ subjects. when covariates are present, a so-called mixture cure model can be +used to model the conditional survival function of the population. it depends on two +components : the probability of being cured and the conditional survival function of +the susceptible subjects. +in this paper we propose a novel approach to estimate a mixture cure model when +the data are subject to random right censoring. we work with a parametric model for +the cure proportion (like e.g. a logistic model), while the conditional survival function +of the uncured subjects is unspecified. the approach is based on an inversion which +allows to write the survival function as a function of the distribution of the observable +random variables. this leads to a very general class of models, which allows a flexible +and rich modeling of the conditional survival function. we show the identifiability of +the proposed model, as well as the weak consistency and the asymptotic normality of +the model parameters. we also consider in more detail the case where kernel estimators +are used for the nonparametric part of the model. 
the new estimators are compared +with the estimators from a cox mixture cure model via finite sample simulations. +finally, we apply the new model and estimation procedure on two medical data sets.",10 +"abstract +we prove that orthogonal constructor term rewrite systems and lambda-calculus with weak +(i.e., no reduction is allowed under the scope of a lambda-abstraction) call-by-value reduction can simulate each other with a linear overhead. in particular, weak call-by-value betareduction can be simulated by an orthogonal constructor term rewrite system in the same +number of reduction steps. conversely, each reduction in an term rewrite system can be +simulated by a constant number of beta-reduction steps. this is relevant to implicit computational complexity, because the number of beta steps to normal form is polynomially related +to the actual cost (that is, as performed on a turing machine) of normalization, under weak +call-by-value reduction. orthogonal constructor term rewrite systems and lambda-calculus +are thus both polynomially related to turing machines, taking as notion of cost their natural +parameters.",6 +"abstract +if every element of a matrix group is similar to a permutation matrix, +then it is called a permutation-like matrix group. references [4], [5] and +[6] showed that, if a permutation-like matrix group contains a maximal +cycle such that the maximal cycle generates a normal subgroup and the +length of the maximal cycle equals to a prime, or a square of a prime, +or a power of an odd prime, then the permutation-like matrix group is +similar to a permutation matrix group. in this paper, we prove that if +a permutation-like matrix group contains a maximal cycle such that the +maximal cycle generates a normal subgroup and the length of the maximal +cycle equals to any power of 2, then it is similar to a permutation matrix +group. +key words: permutation-like matrix group, permutation matrix group, +maximal cycle. 
+msc2010: 15a18, 15a30, 20h20.",4 +"abstract +the main problems of school course timetabling are time, curriculum, and classrooms. in addition there +are other problems that vary from one institution to another. this paper is intended to solve the problem of +satisfying the teachers’ preferred schedule in a way that regards the importance of the teacher to the +supervising institute, i.e. his score according to some criteria. genetic algorithm (ga) has been presented +as an elegant method in solving timetable problem (ttp) in order to produce solutions with no conflict. in +this paper, we consider the analytic hierarchy process (ahp) to efficiently obtain a score for each teacher, +and consequently produce a ga-based ttp solution that satisfies most of the teachers’ preferences.",9 +"abstract— over the years, data mining has attracted most of the +attention from the research community. the researchers attempt +to develop faster, more scalable algorithms to navigate over the +ever increasing volumes of spatial gene expression data in search +of meaningful patterns. association rules are a data mining +technique that tries to identify intrinsic patterns in spatial gene +expression data. it has been widely used in different applications, +a lot of algorithms introduced to discover these rules. however +priori-like algorithms has been used to find positive association +rules. in contrast to positive rules, negative rules encapsulate +relationship between the occurrences of one set of items with +absence of the other set of items. in this paper, an algorithm for +mining negative association rules from spatial gene expression +data is introduced. the algorithm intends to discover the negative +association rules which are complementary to the association +rules often generated by priori like algorithm. 
our study shows +that negative association rules can be discovered efficiently from +spatial gene expression data.",5 +"abstract—signal processing played an important role in improving the quality of communications over copper cables in +earlier dsl technologies. even more powerful signal processing +techniques are required to enable a gigabit per second data +rate in the upcoming g.fast standard. this new standard is +different from its predecessors in many respects. in particular, +g.fast will use a significantly higher bandwidth. at such a high +bandwidth, crosstalk between different lines in a binder will reach +unprecedented levels, which are beyond the capabilities of most +efficient techniques for interference mitigation. in this article, we +survey the state of the art and research challenges in the design of +signal processing algorithms for the g.fast system, with a focus on +novel research approaches and design considerations for efficient +interference mitigation in g.fast systems. we also detail relevant +vdsl techniques and points out their strengths and limitations +for the g.fast system.",7 +"abstract +this paper presents a model for the simulation of liquid-gas-solid flows by means of the +lattice boltzmann method. the approach is built upon previous works for the simulation of +liquid-solid particle suspensions on the one hand, and on a liquid-gas free surface model on +the other. we show how the two approaches can be unified by a novel set of dynamic cell +conversion rules. for evaluation, we concentrate on the rotational stability of non-spherical +rigid bodies floating on a plane water surface – a classical hydrostatic problem known from +naval architecture. we show the consistency of our method in this kind of flows and obtain +convergence towards the ideal solution for the measured heeling stability of a floating box. +1. 
introduction +since its establishment the lattice boltzmann method (lbm) has become a popular +alternative in the field of complex flow simulations [33]. its application to particle suspensions +has been propelled to a significant part by the works of ladd et al. [22, 23] and aidun et al. +[3, 4, 1]. based on the approach of the so-called momentum exchange method, it is possible to +calculate the hydromechanical stresses on the surface of fully resolved solid particles directly +from the lattice boltzmann boundary treatment. in this paper, the aforementioned fluidsolid coupling approach is extended to liquid-gas free surface flows, i.e., the problem of solid +bodies moving freely within a flow of two immiscible fluids. we use the free surface model +of [21, 30] to simulate a liquid phase in interaction with a gas by means of a volume of fluid +approach and a special kinematic free surface boundary condition. i.e., the interface of the +two phases is assumed sharp enough to be modeled by a locally defined boundary layer. +this boundary layer is updated dynamically according to the liquid advection by a set of +cell conversion rules. +this paper proposes a unification of the update rules of the free surface model with those +of the particulate flow model, which also requires a dynamical mapping of the respective solid +boundaries to the lattice boltzmann grid. as described in [5], the resulting scheme allows full +freedom of motion of the solid bodies in the flow, which can be calculated according to rigid +body physics as in [20]. we demonstrate the consistency of the combined liquid-gas-solid +method by means of a simple advection test with a floating body in a stratified liquid-gas +channel flow, and discuss the main source of error in the dynamic boundary handling with +particles in motion. +preprint submitted to elsevier",5 +"abstract syntax tree according to the provided +grammar and is implemented using recursive descent parsing techniques as described in [3]. 
the lexer is +a module used by this pass. errors and warnings reported by the first pass are only lexical and syntactic +errors. pass two traverses the syntax tree depth first, replacing all variables and constructs that are purely +elements of purl and are not expressible in the standard notation. these constructs will be discussed when +exploring the individual pattern elements. in the third pass, the syntax tree is again traversed depth first. +all verification occurs in this pass and errors indicate problems in the structure of the knitting pattern. a +global state object is used throughout parsing to track information necessary for error reporting, such as +section name, position in code, and row number in the generated pattern. it is also used in the verification +pass to track the pattern orientation, width, and row index, and to update nodes with these values as +necessary. +the reason for breaking up parsing into three passes is because a syntax tree representation of the +pattern is much easier to manipulate and verify. a main feature of purl is the ability to define modular +and parametrized segments of patterns, through the pattern sample construct introduced by this language +(see 2.15), so a second pass is used for trimming nodes representing sample calls. also, there are some +challenges in verifying a pattern. it is necessary that every row works all of the stitches of the previous +row, but there are some pattern constructs which work a number of stitches that depends on the width of +the current row. since we allow modular pattern definitions and parameterized segments of patterns, this +verification cannot be done in a single pass over the source language.",6 +"abstract +counting the number of permutations of a given total displacement is equivalent to counting +weighted motzkin paths of a given area (guay-paquet and petersen [10]). the former combinatorial problem is still open. 
in this work we show that this connection allows to construct +efficient algorithms for counting and for sampling such permutations. these algorithms provide +a tool to better understand the original combinatorial problem. a by-product of our approach +is a different way of counting based on certain “building sequences” for motzkin paths, which +may be of independent interest.",8 +"abstract +a promising technique to provide mobile applications with high computation resources is to offload +the processing task to the cloud. mobile cloud computing enables mobile devices with limited batteries +to run resource hungry applications with the help of abundant processing capabilities of the clouds and +to save power. however, it is not always true that cloud computing consumes less energy compared to +mobile edge computing. it may take more energy for the mobile device to transmit a file to the cloud +than running the task itself at the edge. this paper investigates the power minimization problem for the +mobile devices by data offloading in multi-cell multi-user ofdma mobile cloud computing networks. +we consider the maximum acceptable delay and tolerable interference as qos metrics to be satisfied +in our network. we formulate the problem as a mixed integer nonlinear problem which is converted +into a convex form using d.c. approximation. to solve the optimization problem, we have proposed +centralized and distributed algorithms for joint power allocation and channel assignment together with +decision making. our simulation results illustrate that by utilizing the proposed algorithms, considerable +power saving could be achieved e.g. about 60% for short delays and large bitstream sizes in comparison +with the baselines. +index terms +offloading, resource allocation, mobile cloud computing, mobile edge computing.",7 +"abstract. 
in this preliminary note, we will illustrate our ideas on automated mechanisms for termination and non-termination reasoning.",6 +"abstract: despite extensive research and remarkable advancements in the control of complex networks, +time-invariant control schedules (tics) still dominate +the literature. this is both due to their simplicity and +the fact that the potential benefits of time-varying control schedules (tvcs) have remained largely uncharacterized. yet, tvcs have the potential to significantly +enhance network controllability over tics, especially +when applied to large networks. in this paper we study +networks with linear and discrete-time dynamics and +analyze the role of network structure in tvcs. through +the analysis of a new scale-dependent notion of nodal +communicability, we show that optimal tvcs involves +the actuation of the most central nodes at appropriate spatial scales at all times. consequently, we show +that it is the scale-heterogeneity of the central-nodes +in a network that determine whether, and to what extent, tvcs outperforms conventional policies based on +tics. several analytical results and numerical examples support and illustrate this relationship.",3 +"abstract +let k be a nonperfect separably closed field. let g be a connected reductive algebraic group defined over k. we study rationality problems for serre’s notion of complete +reducibility of subgroups of g. in particular, we present a new example of subgroup h +of g of type d4 in characteristic 2 such that h is g-completely reducible but not gcompletely reducible over k (or vice versa). this is new: all known such examples are for +g of exceptional type. we also find a new counterexample for külshammer’s question on +representations of finite groups for g of type d4 . a problem concerning the number of +conjugacy classes is also considered. 
the notion of nonseparable subgroups plays a crucial +role in all our constructions.",4 +"abstract +an algebra has the howson property if the intersection of any two finitely generated subalgebras +is finitely generated. a simple necessary and sufficient condition is given for the howson property +to hold on an inverse semigroup with finitely many idempotents. in addition, it is shown that any +monogenic inverse semigroup has the howson property. +keywords: howson property; e-unitary; monogenic +mathematics subject classification: 20m18",4 +"abstract. the perfect phylogeny problem is a classic problem in computational biology, where we seek an unrooted phylogeny that is compatible with a set of qualitative characters. such a tree exists precisely +when an intersection graph associated with the character set, called the +partition intersection graph, can be triangulated using a restricted set +of fill edges. semple and steel used the partition intersection graph to +characterize when a character set has a unique perfect phylogeny. bordewich, huber, and semple showed how to use the partition intersection +graph to find a maximum compatible set of characters. in this paper, we +build on these results, characterizing when a unique perfect phylogeny +exists for a subset of partial characters. our characterization is stated in +terms of minimal triangulations of the partition intersection graph that +are uniquely representable, also known as ur-chordal graphs. our characterization is motivated by the structure of ur-chordal graphs, and the +fact that the block structure of minimal triangulations is mirrored in the +graph that has been triangulated.",5 +"abstract +we say that a polynomial automorphism φ in n variables is stably +co-tame if the tame subgroup in n variables is contained in the subgroup generated by φ and affine automorphisms in n + 1 variables. 
in +this paper, we give conditions for stably co-tameness of polynomial +automorphisms.",4 +"abstract +in this paper, we introduce a generalized value iteration network (gvin), which is an end-to-end neural network planning +module. gvin emulates the value iteration algorithm by using +a novel graph convolution operator, which enables gvin to +learn and plan on irregular spatial graphs. we propose three +novel differentiable kernels as graph convolution operators +and show that the embedding-based kernel achieves the best +performance. furthermore, we present episodic q-learning, +an improvement upon traditional n-step q-learning that stabilizes training for vin and gvin. lastly, we evaluate gvin +on planning problems in 2d mazes, irregular graphs, and realworld street networks, showing that gvin generalizes well +for both arbitrary graphs and unseen graphs of larger scale +and outperforms a naive generalization of vin (discretizing a +spatial graph into a 2d image).",2 +"abstract—this work proposes a new adaptive-robust control +(arc) architecture for a class of uncertain euler-lagrange (el) +systems where the upper bound of the uncertainty satisfies linear +in parameters (lip) structure. conventional arc strategies +either require structural knowledge of the system or presume +that the overall uncertainties or its time derivative is norm +bounded by a constant. due to unmodelled dynamics and +modelling imperfection, true structural knowledge of the system +is not always available. further, for the class of systems under +consideration, prior assumption regarding the uncertainties (or +its time derivative) being upper bounded by a constant, puts +a restriction on states beforehand. conventional arc laws +invite overestimation-underestimation problem of switching gain. +towards this front, adaptive switching-gain based robust control (asrc) is proposed which alleviates the overestimationunderestimation problem of switching gain. 
moreover, asrc +avoids any presumption of constant upper bound on the overall +uncertainties and can negotiate uncertainties regardless of being +linear or nonlinear in parameters. experimental results of +asrc using a wheeled mobile robot notes improved control +performance in comparison to adaptive sliding mode control. +index terms—adaptive-robust control, euler-lagrange systems, wheeled mobile robot, uncertainty.",3 +"abstract +python is a popular dynamic language with a large part of its +appeal coming from powerful libraries and extension modules. these augment the language and make it a productive +environment for a wide variety of tasks, ranging from web +development (django) to numerical analysis (numpy). +unfortunately, python’s performance is quite poor when +compared to modern implementations of languages such as +lua and javascript. why does python lag so far behind +these other languages? as we show, the very same api and +extension libraries that make python a powerful language +also make it very difficult to efficiently execute. +given that we want to retain access to the great extension +libraries that already exist for python, how fast can we make +it? to evaluate this, we designed and implemented falcon, a +high-performance bytecode interpreter fully compatible with +the standard cpython interpreter. falcon applies a number +of well known optimizations and introduces several new +techniques to speed up execution of python bytecode. in our +evaluation, we found falcon an average of 25% faster than +the standard python interpreter on most benchmarks and in +some cases about 2.5x faster.",6 +"abstract. this paper investigates how high school students approach computing +through an introductory computer science course situated in the logic programming (lp) paradigm. 
this study shows how novice students operate within the +lp paradigm while engaging in foundational computing concepts and skills, and +presents a case for lp as a viable paradigm choice for introductory cs courses. +keywords: cs education, high school cs, declarative programming, logic programming, answer set programming",2 +"abstract +we prove an explicit formula for the first non-zero entry in the n-th +row of the graded betti table of an n-dimensional projective toric variety +associated to a normal polytope with at least one interior lattice point. +this applies to veronese embeddings of pn . we also prove an explicit +formula for the entire n-th row when the interior of the polytope is onedimensional. all results are valid over an arbitrary field k.",0 +"abstract— as autonomous service robots become more affordable and thus available also for the general public, there +is a growing need for user friendly interfaces to control the +robotic system. currently available control modalities typically +expect users to be able to express their desire through either +touch, speech or gesture commands. while this requirement +is fulfilled for the majority of users, paralyzed users may not +be able to use such systems. in this paper, we present a novel +framework, that allows these users to interact with a robotic +service assistant in a closed-loop fashion, using only thoughts. +the brain-computer interface (bci) system is composed of +several interacting components, i.e., non-invasive neuronal signal recording and decoding, high-level task planning, motion +and manipulation planning as well as environment perception. +in various experiments, we demonstrate its applicability and +robustness in real world scenarios, considering fetch-and-carry +tasks and tasks involving human-robot interaction. as our +results demonstrate, our system is capable of adapting to +frequent changes in the environment and reliably completing +given tasks within a reasonable amount of time. 
combined with +high-level planning and autonomous robotic systems, interesting +new perspectives open up for non-invasive bci-based humanrobot interactions.",2 +abstract,2 +"abstraction +a directed graph g (v, e) is strongly connected if and only if, for any pair of vertices x and y from +v, there exists a path from x to y and a path from y to x. in computer science, the partition of a +graph in strongly connected components is represented by the partition of all vertices from the +graph, so that for any two vertices, x and y, from the same partition, there exists a path from x +to y and a path from y to x and for any two vertices, u and v, from different partition, the +property is not met. the algorithm presented below is meant to find the partition of a given graph +in strongly connected components in o (numberofnodes + numberofedges * log* +(numberofnodes)), where log* function stands for iterated logarithm.",8 +"abstract +first-order logic (fol) is widely regarded as one of the most important foundations for knowledge representation. nevertheless, in this paper, we argue that +fol has several critical issues for this purpose. instead, we propose an alternative called assertional logic, in which all syntactic objects are categorized as set +theoretic constructs including individuals, concepts and operators, and all kinds of +knowledge are formalized by equality assertions. we first present a primitive form +of assertional logic that uses minimal assumed knowledge and constructs. then, +we show how to extend it by definitions, which are special kinds of knowledge, +i.e., assertions. we argue that assertional logic, although simpler, is more expressive and extensible than fol. as a case study, we show how assertional logic can +be used to unify logic and probability, and more building blocks in ai.",2 +"abstract +this article is devoted to study the effects of the s-periodical fractional differencing filter (1 − ls )dt . 
to put this effect in evidence, we +have derived the periodic auto-covariance functions of two distinct univariate seasonally fractionally differenced periodic models. a multivariate +representation of periodically correlated process is exploited to provide +the exact and approximated expression auto-covariance of each models. +the distinction between the models is clearly obvious through the expression of periodic auto-covariance function. besides producing different +auto-covariance functions, the two models differ in their implications. +in the first model, the seasons of the multivariate series are separately +fractionally integrated. in the second model, however, the seasons for the +univariate series are fractionally co-integrated. on the simulated sample, +for each models, with the same parameters, the empirical periodic autocovariance are calculated and graphically represented for illustrating the +results and support the comparison between the two models.",10 +"abstract—cooperative adaptive cruise control (cacc) is +one of the driving applications of vehicular ad-hoc networks +(vanets) and promises to bring more efficient and faster +transportation through cooperative behavior between vehicles. +in cacc, vehicles exchange information, which is relied on to +partially automate driving; however, this reliance on cooperation +requires resilience against attacks and other forms of misbehavior. in this paper, we propose a rigorous attacker model +and an evaluation framework for this resilience by quantifying +the attack impact, providing the necessary tools to compare +controller resilience and attack effectiveness simultaneously. although there are significant differences between the resilience of +the three analyzed controllers, we show that each can be attacked +effectively and easily through either jamming or data injection. 
+our results suggest a combination of misbehavior detection +and resilient control algorithms with graceful degradation are +necessary ingredients for secure and safe platoons.",3 +"abstract. let g and g0 be two right-angled artin groups. we show they +are quasi-isometric if and only if they are isomorphic, under the assumption +that the outer automorphism groups out(g) and out(g0 ) are finite. if we +only assume out(g) is finite, then g0 is quasi-isometric g if and only if g0 +is isomorphic to a subgroup of finite index in g. in this case, we give an +algorithm to determine whether g and g0 are quasi-isometric by looking at +their defining graphs.",4 +"abstract +we study machine learning formulations of inductive program synthesis; that is, +given input-output examples, synthesize source code that maps inputs to corresponding outputs. our key contribution is t erpre t, a domain-specific language +for expressing program synthesis problems. a t erpre t model is composed of +a specification of a program representation and an interpreter that describes how +programs map inputs to outputs. the inference task is to observe a set of inputoutput examples and infer the underlying program. from a t erpre t model we +automatically perform inference using four different back-ends: gradient descent +(thus each t erpre t model can be seen as defining a differentiable interpreter), +linear program (lp) relaxations for graphical models, discrete satisfiability solving, +and the s ketch program synthesis system. t erpre t has two main benefits. first, +it enables rapid exploration of a range of domains, program representations, and +interpreter models. second, it separates the model specification from the inference +algorithm, allowing proper comparisons between different approaches to inference. 
+we illustrate the value of t erpre t by developing several interpreter models and +performing an extensive empirical comparison between alternative inference algorithms on a variety of program models. to our knowledge, this is the first work +to compare gradient-based search over program space to traditional search-based +alternatives. our key empirical finding is that constraint solvers dominate the gradient descent and lp-based formulations.",9 +"abstract +the automatic coding of clinical documentation according +to diagnosis codes is a useful task in the electronic health +record, but a challenging one due to the large number +of codes and the length of patient notes. we investigate +four models for assigning multiple icd codes to discharge +summaries, and experiment with data from the mimic ii +and iii clinical datasets. we present hierarchical attentionbidirectional gated recurrent unit (ha-gru), a hierarchical approach to tag a document by identifying the sentences +relevant for each label. ha-gru achieves state-of-the art results. furthermore, the learned sentence-level attention layer +highlights the model decision process, allows for easier error +analysis, and suggests future directions for improvement.",2 +"abstract. in this article, we introduce a procedure for selecting variables in principal components analysis. it is developed to identify a small +subset of the original variables that best explain the principal components through nonparametric relationships. there are usually some +noisy uninformative variables in a dataset, and some variables that are +strongly related to one another because of their general dependence. the +procedure is designed to be used following the satisfactory initial principal components analysis with all variables, and its aim is to help to +interpret the underlying structures. we analyze the asymptotic behavior +of the method and provide some examples.",10 +abstract,5 +"abstract. 
lyubeznik’s conjecture ([ly1], remark 3.7) asserts the finiteness
we apply these conditions to prove that the fundamental groups +of all closed surfaces, except the klein bottle, and almost all free products of groups satisfying a +non-trivial law are algebraically closed in any group in which they are verbally closed.",4 +abstract,8 +"abstract +this paper studies synchronization of dynamical networks with event-based communication. firstly, two estimators are +introduced into each node, one to estimate its own state, and the other to estimate the average state of its neighbours. then, +with these two estimators, a distributed event-triggering rule (etr) with a dwell time is designed such that the network +achieves synchronization asymptotically with no zeno behaviours. the designed etr only depends on the information that +each node can obtain, and thus can be implemented in a decentralized way. +key words: distributed event-triggered control, asymptotic synchronization, dynamical networks.",3 +abstract,3 +"abstract +this paper considers fully dynamic graph algorithms with both faster worst case update +time and sublinear space. the fully dynamic graph connectivity problem is the following: given +a graph on a fixed set of n nodes, process an online sequence of edge insertions, edge deletions, +and queries of the form “is there a path between nodes a and b?” in 2013, the first data structure +was presented with worst case time per operation which was polylogarithmic in n. in this paper, +we shave off a factor of log n from that time, to o(log 4 n) per update. for sequences which are +polynomial in length, our algorithm answers queries in o(log n/ log log n) time correctly with +high probability and using o(n log2 n) words (of size log n). this matches the amount of space +used by the most space-efficient graph connectivity streaming algorithm. 
we also show that +2-edge connectivity can be maintained using o(n log2 n) words with an amortized update time +of o(log6 n).",8 +"abstract— in this paper a binary feature based loop closure +detection (lcd) method is proposed, which for the first time +achieves higher precision-recall (pr) performance compared +with state-of-the-art sift feature based approaches. the proposed system originates from our previous work multi-index +hashing for loop closure detection (mild), which employs +multi-index hashing (mih) [1] for approximate nearest neighbor (ann) search of binary features. as the accuracy of mild +is limited by repeating textures and inaccurate image similarity +measurement, burstiness handling is introduced to solve this +problem and achieves considerable accuracy improvement. +additionally, a comprehensive theoretical analysis on mih +used in mild is conducted to further explore the potentials +of hashing methods for ann search of binary features from +probabilistic perspective. this analysis provides more freedom +on best parameter choosing in mih for different application +scenarios. experiments on popular public datasets show that +the proposed approach achieved the highest accuracy compared +with state-of-the-art while running at 30hz for databases +containing thousands of images.",1 +"abstract +we study the complexity of the problem detection pair. a detection pair of a graph +g is a pair (w, l) of sets of detectors with w ⊆ v (g), the watchers, and l ⊆ v (g), the +listeners, such that for every pair u, v of vertices that are not dominated by a watcher of w , +there is a listener of l whose distances to u and to v are different. the goal is to minimize +|w | + |l|. this problem generalizes the two classic problems dominating set and metric +dimension, that correspond to the restrictions l = ∅ and w = ∅, respectively. detection +pair was recently introduced by finbow, hartnell and young [a. s. finbow, b. l. hartnell +and j. r. young. 
the complexity of monitoring a network with both watchers and listeners. +networks, accepted], who proved it to be np-complete on trees, a surprising result given that +both dominating set and metric dimension are known to be linear-time solvable on trees. +it follows from an existing reduction by hartung and nichterlein for metric dimension that +even on bipartite subcubic graphs of arbitrarily large girth, detection pair is np-hard to +approximate within a sub-logarithmic factor and w[2]-hard (when parameterized by solution +size). we show, using a reduction to set cover, that detection pair is approximable +within a factor logarithmic in the number of vertices of the input graph. our two main results +are a linear-time 2-approximation algorithm and an fpt algorithm for detection pair on +trees. +keywords: graph theory, detection pair, metric dimension, dominating set, approximation algorithm, parameterized complexity",8 +"abstract. let g be the circulant graph cn (s) with s ⊆ {1, 2, . . . , ⌊ n2 ⌋}, and let i(g) +denote its the edge ideal in the ring r = k[x1 , . . . , xn ]. we consider the problem of +determining when g is cohen-macaulay, i.e, r/i(g) is a cohen-macaulay ring. because +a cohen-macaulay graph g must be well-covered, we focus on known families of wellcovered circulant graphs of the form cn (1, 2, . . . , d). we also characterize which cubic +circulant graphs are cohen-macaulay. we end with the observation that even though +the well-covered property is preserved under lexicographical products of graphs, this is +not true of the cohen-macaulay property.",0 +"abstractions for optimal checkpointing in +inversion problems +navjot kukreja∗",5 +"abstract +the k-co-path set problem asks, given a graph g and a positive +integer k, whether one can delete k edges from g so that the remainder +is a collection of disjoint paths. 
we give a linear-time fpt algorithm +with complexity o∗ (1.588k ) for deciding k-co-path set, significantly +improving the previously best known o∗ (2.17k ) of feng, zhou, and wang +(2015). our main tool is a new o∗ (4tw(g) ) algorithm for co-path set +using the cut&count framework, where tw(g) denotes treewidth. in +general graphs, we combine this with a branching algorithm which refines +a 6k-kernel into reduced instances, which we prove have bounded treewidth.",8 +"abstract. we present a parametric abstract domain for array content +analysis. the method maintains invariants for contiguous regions of the +array, similar to the methods of gopan, reps and sagiv, and of halbwachs and péron. however, it introduces a novel concept of an array +content graph, avoiding the need for an up-front factorial partitioning +step. the resulting analysis can be used with arbitrary numeric relational abstract domains; we evaluate the domain on a range of array +manipulating program fragments.",6 +"abstract +satellite radar altimetry is one of the most powerful techniques for measuring sea surface height variations, +with applications ranging from operational oceanography to climate research. over open oceans, altimeter +return waveforms generally correspond to the brown model, and by inversion, estimated shape parameters +provide mean surface height and wind speed. however, in coastal areas or over inland waters, the waveform +shape is often distorted by land influence, resulting in peaks or fast decaying trailing edges. as a result, +derived sea surface heights are then less accurate and waveforms need to be reprocessed by sophisticated +algorithms. to this end, this work suggests a novel spatio-temporal altimetry retracking (star) technique. 
we show that star enables the derivation of sea surface heights over the open ocean as well as +over coastal regions of at least the same quality as compared to existing retracking methods, but for a +larger number of cycles and thus retaining more useful data. novel elements of our method are (a) integrating information from spatially and temporally neighboring waveforms through a conditional random +field approach, (b) sub-waveform detection, where relevant sub-waveforms are separated from corrupted or +non-relevant parts through a sparse representation approach, and (c) identifying the final best set of sea +surfaces heights from multiple likely heights using dijkstra’s algorithm. we apply star to data from the +jason-1, jason-2 and envisat missions for study sites in the gulf of trieste, italy and in the coastal region +of the ganges-brahmaputra-meghna estuary, bangladesh. we compare to several established and recent +retracking methods, as well as to tide gauge data. our experiments suggest that the obtained sea surface +heights are significantly less affected by outliers when compared to results obtained by other approaches. +keywords: coastal, oceans, altimetry, retracking, sea surface heights, conditional random fields, sparse +representation",1 +"abstract +an extremely simple, description of karmarkar’s algorithm with very few +technical terms is given.",8 +"abstract. we prove that the word problem is undecidable in functionally recursive groups, and that the order problem is undecidable in automata groups, +even under the assumption that they are contracting.",4 +"abstract +consider a process satisfying a stochastic differential equation with unknown drift +parameter, and suppose that discrete observations are given. it is known that a simple +least squares estimator (lse) can be consistent, but numerically unstable in the sense +of large standard deviations under finite samples when the noise process has jumps. 
+we propose a filter to cut large shocks from data, and construct the same lse from +data selected by the filter. the proposed estimator can be asymptotically equivalent to +the usual lse, whose asymptotic distribution strongly depends on the noise process. +however, in numerical study, it looked asymptotically normal in an example where +filter was choosen suitably, and the noise was a lévy process. we will try to justify +this phenomenon mathematically, under certain restricted assumptions. +key words: stochastic differential equation, semimartingale noise, small noise +asymptotics, drift estimation, threshold estimator, mighty convergence. +msc2010: 62f12, 62m05; 60g52, 60j75",10 +"abstract +we study approximations of the partition function of dense graphical models. partition +functions of graphical models play a fundamental role is statistical physics, in statistics and in +machine learning. two of the main methods for approximating the partition function are markov +chain monte carlo and variational methods. an impressive body of work in mathematics, +physics and theoretical computer science provides conditions under which markov chain monte +carlo methods converge in polynomial time. these methods often lead to polynomial time +approximation algorithms for the partition function in cases where the underlying model exhibits +correlation decay. there are very few theoretical guarantees for the performance of variational +methods. one exception is recent results by risteski (2016) who considered dense graphical +models and showed that using variational methods, it is possible to find an o(ǫn) additive +2 +approximation to the log partition function in time no(1/ǫ ) even in a regime where correlation +decay does not hold. +we show that under essentially the same conditions, an o(ǫn) additive approximation of the +log partition function can be found in constant time, independent of n. 
in particular, our results +cover dense ising and potts models as well as dense graphical models with k-wise interaction. +they also apply for low threshold rank models. +to the best of our knowledge, our results are the first to give a constant time approximation +to log partition functions and the first to use the algorithmic regularity lemma for estimating +partition functions. as an application of our results we derive a constant time algorithm for +approximating the magnetization of ising and potts model on dense graphs.",8 +"abstract—millimeter-wave (mmwave) communication and +network densification hold great promise for achieving highrate communication in next-generation wireless networks. cloud +radio access network (cran), in which low-complexity remote +radio heads (rrhs) coordinated by a central unit (cu) are +deployed to serve users in a distributed manner, is a costeffective solution to achieve network densification. however, when +operating over a large bandwidth in the mmwave frequencies, +the digital fronthaul links in a cran would be easily saturated +by the large amount of sampled and quantized signals to be +transferred between rrhs and the cu. to tackle this challenge, +we propose in this paper a new architecture for mmwavebased cran with advanced lens antenna arrays at the rrhs. +due to the energy focusing property, lens antenna arrays are +effective in exploiting the angular sparsity of mmwave channels, +and thus help in substantially reducing the fronthaul rate and +simplifying the signal processing at the multi-antenna rrhs +and the cu, even when the channels are frequency-selective. +we consider the uplink transmission in a mmwave cran with +lens antenna arrays and propose a low-complexity quantization +bit allocation scheme for multiple antennas at each rrh to +meet the given fronthaul rate constraint. 
further, we propose +a channel estimation technique that exploits the energy focusing +property of the lens array and can be implemented at the cu +with low complexity. finally, we compare the proposed mmwave +cran using lens antenna arrays with a conventional cran +using uniform planar arrays at the rrhs, and show that the +proposed design achieves significant throughput gains, yet with +much lower complexity. +index terms—cloud radio access network, millimeter-wave +communication, lens antenna array, channel estimation, fronthaul constraint, antenna selection, quantization bit allocation.",7 +"abstract +an oracle is a design for potentially high power +artificial intelligences (ais), where the ai is +made safe by restricting it to only answer questions. unfortunately most designs cause the oracle to be motivated to manipulate humans with +the contents of their answers, and oracles of potentially high intelligence might be very successful at this. solving that problem, without compromising the accuracy of the answer, is tricky. +this paper reduces the issue to a cryptographicstyle problem of alice ensuring that her oracle +answers her questions while not providing key +information to an eavesdropping eve. two oracle designs solve this problem, one counterfactual (the oracle answers as if it expected its answer to never be read) and one on-policy, but limited by the quantity of information it can transmit.",2 +"abstract +this paper describes a generic algorithm for concurrent +resizing and on-demand per-bucket rehashing for an extensible +hash table. in contrast to known lock-based hash table algorithms, +the proposed algorithm separates the resizing and rehashing stages +so that they neither invalidate existing buckets nor block any +concurrent operations. instead, the rehashing work is deferred and +split across subsequent operations with the table. 
the rehashing +operation uses bucket-level synchronization only and therefore +allows a race condition between lookup and moving operations +running in different threads. instead of using explicit +synchronization, the algorithm detects the race condition and +restarts the lookup operation. in comparison with other lock-based +algorithms, the proposed algorithm reduces high-level +synchronization on the hot path, improving performance, +concurrency, and scalability of the table. the response time of the +operations is also more predictable. the algorithm is compatible +with cache friendly data layouts for buckets and does not depend +on any memory reclamation techniques thus potentially achieving +additional performance gain with corresponding implementations. +categories and subject descriptors: d.1.3 [programming +techniques]: concurrent programming – parallel programming; +d.4.1 [operating systems]: process management – +synchronization; +concurrency; +multiprocessing, +multiprogramming, +multitasking; +e.2 +[data +storage +representation] – hash-table representations. +general terms:",8 +"abstract +most reinforcement learning algorithms are inefficient for learning multiple tasks in complex robotic +systems, where different tasks share a set of actions. in such environments a compound policy +may be learnt with shared neural network parameters, which performs multiple tasks concurrently. +however such compound policy may get biased towards a task or the gradients from different tasks +negate each other, making the learning unstable +and sometimes less data efficient. in this paper, +we propose a new approach for simultaneous training of multiple tasks sharing a set of common actions in continuous action spaces, which we call as +digrad (differential policy gradient). the proposed framework is based on differential policy +gradients and can accommodate multi-task learning +in a single actor-critic network. 
we also propose a simple heuristic in the differential policy gradient update to further improve the learning. the proposed architecture was tested on an 8-link planar manipulator and a 27 degrees-of-freedom (dof) humanoid for learning multi-goal reachability tasks for 3 and 2 end effectors, respectively. we show that our approach supports efficient multi-task learning in complex robotic systems, outperforming related methods in continuous action spaces.",2
"abstract—this paper deals with the certification problem for robust quadratic stability, robust state convergence, and robust quadratic performance of linear systems that exhibit bounded rates of variation in their parameters. we consider both continuous-time (ct) and discrete-time (dt) parameter-varying systems. in this paper, we provide a uniform method for this certification problem in both cases and we show that, contrary to what was claimed previously, the dt case requires a significantly different treatment compared to the existing ct results. in the established uniform approach, quadratic lyapunov functions, that are affine in the parameter, are used to certify robust stability, robust convergence rates, and robust performance in terms of linear matrix inequality feasibility tests. to exemplify the procedure, we solve the certification problem for l2-gain performance both in the ct and the dt cases. a numerical example is given to show that the proposed approach is less conservative than a method with slack variables.
index terms—linear parameter-varying systems; parameter-varying lyapunov functions; stability of linear systems; lmis.",3
"abstract
blind source separation, i.e. extraction of independent sources from a mixture, is an important problem for both artificial and natural signal processing. here, we address a special case of this problem when sources (but not the mixing matrix) are known to be nonnegative, for example, due to the physical nature of the sources.
we search for a solution to this problem that can be implemented using biologically plausible neural networks. specifically, we consider the online setting where the dataset is streamed to a neural network. the novelty of our approach is that we formulate blind nonnegative source separation as a similarity matching problem and derive neural networks from the similarity matching objective. importantly, synaptic weights in our networks are updated according to biologically plausible local learning rules.",9
"abstract
ultra-reliable, low-latency communications (urllc) are currently attracting significant attention due to the emergence of mission-critical applications and device-centric communication. urllc will entail a fundamental paradigm shift from throughput-oriented system design towards holistic designs for guaranteed and reliable end-to-end latency. a deep understanding of the delay performance of wireless networks is essential for efficient urllc systems. in this paper, we investigate the network layer performance of multiple-input, single-output (miso) systems under statistical delay constraints. we provide closed-form expressions for the miso diversity-oriented service process and derive probabilistic delay bounds using tools from stochastic network calculus. in particular, we analyze transmit beamforming with perfect and imperfect channel knowledge and compare it with orthogonal space-time codes and antenna selection. the effect of transmit power, number of antennas, and finite blocklength channel coding on the delay distribution is also investigated. our higher layer performance results reveal key insights of miso channels and provide useful guidelines for the design of ultra-reliable communication systems that can guarantee the stringent urllc latency requirements.",7
"abstract explores two extensions to this system, which we explain in the context of the actor model (although they are equally applicable to a system using locks).
rather +than rejecting programs where actors leak internal objects, we allow an actor to bestow its +synchronisation mechanism upon the exposed objects. this allows multiple objects to effectively +construct an actor’s interface. exposing internal operations externally makes concurrency more +fine-grained. to allow external control of the possible interleaving of these operations, we introduce +an atomic block that groups them together. the following section motivates these extensions. +v.t. vasconcelos and p. haller (eds.): workshop on programming language +approaches to concurrency- and communication-centric software (places’17) +eptcs 246, 2017, pp. 10–20, doi:10.4204/eptcs.246.4",6 +"abstract: high-dimensional linear regression with interaction effects is +broadly applied in research fields such as bioinformatics and social science. +in this paper, we first investigate the minimax rate of convergence for regression estimation in high-dimensional sparse linear models with two-way +interactions. we derive matching upper and lower bounds under three types +of heredity conditions: strong heredity, weak heredity and no heredity. from +the results: (i) a stronger heredity condition may or may not drastically +improve the minimax rate of convergence. 
in fact, in some situations, the minimax rates of convergence are the same under all three heredity conditions; (ii) the minimax rate of convergence is determined by the maximum of the total price of estimating the main effects and that of estimating the interaction effects, which goes beyond purely comparing the order of the number of non-zero main effects r1 and non-zero interaction effects r2; (iii) under any of the three heredity conditions, the estimation of the interaction terms may be the dominant part in determining the rate of convergence for two different reasons: 1) there exist more interaction terms than main effect terms, or 2) a large ambient dimension makes it more challenging to estimate even a small number of interaction terms. second, we construct an adaptive estimator that achieves the minimax rate of convergence regardless of the true heredity condition and the sparsity indices r1, r2.
msc 2010 subject classifications: primary 62c20; secondary 62j05.
keywords and phrases: minimax rate of convergence, sparsity, high-dimensional regression, quadratic model, interaction selection, heredity condition, hierarchical structure, adaptive estimation.",10
"abstract
this report studies data-driven estimation of the directed information (di) measure between two discrete-time and continuous-amplitude random processes, based on the k-nearest-neighbors (k-nn) estimation framework. detailed derivations of two k-nn estimators are provided. the two estimators differ in the metric based on which the nearest neighbors are found. to facilitate the estimation of the di measure, it is assumed that the observed sequences are (jointly) markovian of order m. as m is generally not known, a data-driven method (that is also based on the k-nn principle) for estimating m from the observed sequences is presented.
an exhaustive numerical study shows that the discussed k-nn estimators perform well even for a relatively small number of samples (a few thousand). moreover, it is shown that the discussed estimators are capable of accurately detecting linear as well as non-linear causal interactions.",7
"abstract
graphical models use the intuitive and well-studied methods of graph theory to implicitly represent dependencies between variables in large systems. they can model the global behaviour of a complex system by specifying only local factors. this thesis studies inference in discrete graphical models from an “algebraic perspective” and the ways inference can be used to express and approximate np-hard combinatorial problems.
we investigate the complexity and reducibility of various inference problems, in part by organizing them in an inference hierarchy. we then investigate tractable approximations for a subset of these problems using the distributive law in the form of message passing. the quality of the resulting message passing procedure, called belief propagation (bp), depends on the influence of loops in the graphical model. we contribute to three classes of approximations that improve bp for loopy graphs: (i) loop correction techniques; (ii) survey propagation, another message passing technique that surpasses bp in some settings; and (iii) hybrid methods that interpolate between deterministic message passing and markov chain monte carlo inference.
+we then review the existing message passing solutions and provide novel graphical models and inference techniques for combinatorial problems under three broad classes: (i) constraint satisfaction problems (csps) such as satisfiability, coloring, packing, set / clique-cover and dominating / independent set +and their optimization counterparts; (ii) clustering problems such as hierarchical clustering, k-median, +k-clustering, k-center and modularity optimization; (iii) problems over permutations including (bottleneck) assignment, graph “morphisms” and alignment, finding symmetries and (bottleneck) traveling +salesman problem. in many cases we show that message passing is able to find solutions that are either +near optimal or favourably compare with today’s state-of-the-art approaches.",0 +"abstract +consider the standard nonparametric regression model and take as estimator the penalized least squares function. in this article, we study the trade-off +between closeness to the true function and complexity penalization of the estimator, where complexity is described by a seminorm on a class of functions. +first, we present an exponential concentration inequality revealing the concentration behavior of the trade-off of the penalized least squares estimator around +a nonrandom quantity, where such quantity depends on the problem under consideration. then, under some conditions and for the proper choice of the tuning +parameter, we obtain bounds for this nonrandom quantity. we illustrate our results with some examples that include the smoothing splines estimator. +keywords: concentration inequalities, regularized least squares, +statistical trade-off.",10 +"abstract. this is a short survey on existing upper and lower +bounds on the probability of the union of a finite number of events +using partial information given in terms of the individual or pairwise event probabilities (or their sums). 
new proofs for some of +the existing bounds are provided and new observations regarding +the existing gallot–kounias bound are given.",7 +"abstract—we present kleuren, a novel assembly-free +method to reconstruct phylogenetic trees using the colored +de bruijn graph. kleuren works by constructing the colored de bruijn graph and then traversing it, finding bubble +structures in the graph that provide phylogenetic signal. +the bubbles are then aligned and concatenated to form a +supermatrix, from which a phylogenetic tree is inferred. we +introduce the algorithms that kleuren uses to accomplish +this task, and show its performance on reconstructing the +phylogenetic tree of 12 drosophila species. kleuren reconstructed the established phylogenetic tree accurately and +is a viable tool for phylogenetic tree reconstruction using +whole genome sequences. software package available at: +https://github.com/colelyman/kleuren. +keywords-phylogenetics; algorithm; whole genome sequence; +colored de bruijn graph",8 +"abstract— we consider an energy harvesting transmitter sending status updates regarding a physical phenomenon it observes +to a receiver. different from the existing literature, we consider +a scenario where the status updates carry information about +an independent message. the transmitter encodes this message +into the timings of the status updates. the receiver needs to +extract this encoded information, as well as update the status +of the observed phenomenon. the timings of the status updates, +therefore, determine both the age of information (aoi) and the +message rate (rate). we study the tradeoff between the achievable +message rate and the achievable average aoi. 
we propose several achievable schemes and compare their rate-aoi performances.",7
"abstract
the evolution of the hemagglutinin amino acid sequences of influenza a virus is studied by a method based on an informational metric, originally introduced by rohlin for partitions in abstract probability spaces. this metric does not require any previous functional or syntactic knowledge about the sequences and it is sensitive to correlated variations in the disposition of the characters. its efficiency is improved by algorithmic tools designed to enhance the detection of novelty and to reduce the noise of useless mutations. we focus on the usa data from 1993/94 to 2010/2011 for a/h3n2 and on usa data from 2006/07 to 2010/2011 for a/h1n1. we show that the clusterization of the distance matrix gives strong evidence of a structure of domains in the sequence space, acting as weak attractors for the evolution, in very good agreement with the epidemiological history of the virus. the structure proves very robust with respect to variations of the clusterization parameters, and extremely coherent when restricting the observation window. the results suggest an efficient strategy for the vaccine forecast, based on the presence of ”precursors” (or ”buds”) populating the most recent attractor.",5
"abstract. finding the common structural features of two molecules is a fundamental task in cheminformatics. most drugs are small molecules, which can naturally be interpreted as graphs. hence, the task is formalized as the maximum common subgraph problem. although the vast majority of molecules yield outerplanar graphs, this problem remains np-hard. we consider a variation of the problem of high practical relevance, where the rings of molecules must not be broken, i.e., the block and bridge structure of the input graphs must be retained by the common subgraph.
we +present an algorithm for finding a maximum common connected induced +subgraph of two given outerplanar graphs subject to this constraint. our +approach runs in time o(∆n2 ) in outerplanar graphs on n vertices with +maximum degree ∆. this leads to a quadratic time complexity in molecular graphs, which have bounded degree. the experimental comparison +on synthetic and real-world datasets shows that our approach is highly +efficient in practice and outperforms comparable state-of-the-art algorithms.",8 +"abstract of thesis presented to lncc/mct in partial fulfillment of the requirements for the degree of doctor of sciences (d.sc.) +managing large-scale scientific hypotheses as +uncertain and probabilistic data +bernardo gonçalves +february - 2015",5 +"abstract +transversality is a simple and effective method for implementing quantum computation faulttolerantly. however, no quantum error-correcting code (qecc) can transversally implement a +quantum universal gate set (eastin and knill, phys. rev. lett., 102, 110502). since reversible +classical computation is often a dominating part of useful quantum computation, whether or not +it can be implemented transversally is an important open problem. we show that, other than +a small set of non-additive codes that we cannot rule out, no binary qecc can transversally +implement a classical reversible universal gate set. in particular, no such qecc can implement +the toffoli gate transversally. +we prove our result by constructing an information theoretically secure (but inefficient) quantum homomorphic encryption (its-qhe) scheme inspired by ouyang et al. (arxiv:1508.00938). +homomorphic encryption allows the implementation of certain functions directly on encrypted +data, i.e. homomorphically. our scheme builds on almost any qecc, and implements that +code’s transversal gate set homomorphically. 
we observe a restriction imposed by nayak’s bound (focs 1999) on its-qhe, implying that any its quantum fully homomorphic encryption scheme (its-qfhe) implementing the full set of classical reversible functions must be highly inefficient. while our scheme incurs exponential overhead, any such qecc implementing toffoli transversally would still violate this lower bound through our scheme.",7
"abstract
minwise hashing is a fundamental and one of the most successful hashing algorithms in the literature. recent advances based on the idea of densification (shrivastava & li, 2014a;c) have shown that it is possible to compute k minwise hashes, of a vector with d nonzeros, in mere (d + k) computations, a significant improvement over the classical o(dk). these advances have led to an algorithmic improvement in the query complexity of traditional indexing algorithms based on minwise hashing. unfortunately, the variance of the current densification techniques is unnecessarily high, which leads to significantly poor accuracy compared to vanilla minwise hashing, especially when the data is sparse. in this paper, we provide a novel densification scheme which relies on carefully tailored 2-universal hashes. we show that the proposed scheme is variance-optimal, and without losing the runtime efficiency, it is significantly more accurate than existing densification techniques. as a result, we obtain a significantly efficient hashing scheme which has the same variance and collision probability as minwise hashing. experimental evaluations on real sparse and high-dimensional datasets validate our claims. we believe that given the significant advantages, our method will replace minwise hashing implementations in practice.",8
"abstract
suppose that we wish to infer the value of a statistical parameter at a law from which we sample independent observations.
suppose that this parameter is smooth and that we can define two variation-independent, infinite-dimensional features of the law, its so-called q- and g-components (comp.), such that if we estimate them consistently at a fast enough product of rates, then we can build a confidence interval (ci) with a given asymptotic level based on a plain targeted minimum loss estimator (tmle). the estimators of the q- and g-comp. would typically be byproducts of machine learning algorithms. we focus on the case that the machine learning algorithm for the g-comp. is fine-tuned by a real-valued parameter h. then, a plain tmle with an h chosen by cross-validation would typically not lend itself to the construction of a ci, because the selection of h would trade off its empirical bias with something akin to the empirical variance of the estimator of the g-comp. as opposed to that of the tmle. a collaborative tmle (c-tmle) might, however, succeed in achieving the relevant trade-off. we prove that this is indeed the case.
we construct a c-tmle and show that, under high-level empirical processes conditions, and if there exists an oracle h that makes a bulky remainder term asymptotically gaussian, then the c-tmle is asymptotically gaussian, hence amenable to building a ci provided that its asymptotic variance can be estimated too. the construction hinges on guaranteeing that an additional, well-chosen estimating equation is solved on top of the estimating equation that a plain tmle solves. the optimal h is chosen by cross-validating an empirical criterion that guarantees the wished trade-off between empirical bias and variance.
we illustrate the construction and main result with the inference of the so-called average treatment effect, where the q-comp. consists in a marginal law and a conditional expectation, and the g-comp. is a propensity score (a conditional probability).
we also conduct a multifaceted simulation study to investigate the empirical properties of the collaborative tmle when the g-comp. is estimated by the lasso. here, h is the bound on the l1-norm of the candidate coefficients. the variety of scenarios sheds light on small and moderate sample properties, in the face of low-, moderate- or high-dimensional baseline covariates, and possible positivity violation.",10