title | abstract |
---|---|
universal supervised learning for individual data | universal supervised learning is considered from an information theoretic point of view following the universal prediction approach (see merhav and feder, 1998). we consider the standard supervised "batch" learning where prediction is done on a test sample once the entire training data is observed, and the individual setting where the features and labels, both in the training and test, are specific individual quantities. the information theoretic approach naturally uses the self-information loss or log-loss. our results provide universal learning schemes that compete with a "genie" (or reference) that knows the true test label. in particular, it is demonstrated that the main proposed scheme, termed predictive normalized maximum likelihood (pnml), is a robust learning solution that outperforms the current leading approach based on empirical risk minimization (erm). furthermore, the pnml construction provides a pointwise indication for the learnability of the specific test challenge with the given training examples. |
neural networks versus logistic regression for 30 days all-cause readmission prediction | heart failure (hf) is one of the leading causes of hospital admissions in the us. readmission within 30 days after a hf hospitalization is both a recognized indicator for disease progression and a source of considerable financial burden to the healthcare system. consequently, the identification of patients at risk for readmission is a key step in improving disease management and patient outcome. in this work, we used a large administrative claims dataset to (1) explore the systematic application of neural network-based models versus logistic regression for predicting 30 days all-cause readmission after discharge from a hf admission, and (2) examine the additive value of patients' hospitalization timelines on prediction performance. based on data from 272,778 (49% female) patients with a mean (sd) age of 73 years (14) and 343,328 hf admissions (67% of total admissions), we trained and tested our predictive readmission models following a stratified 5-fold cross-validation scheme. among the deep learning approaches, a recurrent neural network (rnn) combined with conditional random fields (crf) model (rnncrf) achieved the best performance in readmission prediction with 0.642 auc (95% ci, 0.640-0.645). other models, such as those based on rnn, convolutional neural networks and crf alone had lower performance, with a non-timeline based model (mlp) performing worst. a competitive model based on logistic regression with lasso achieved a performance of 0.643 auc (95% ci, 0.640-0.646). we conclude that data from patient timelines improve 30-day readmission prediction for neural network-based models, that a logistic regression with lasso has equal performance to the best neural network model and that the use of administrative data results in competitive performance compared to published approaches based on richer clinical datasets. |
uncovering urban mobility and city dynamics from large-scale taxi origin-destination (o-d) trips: case study in washington dc area | we perform a systematic analysis on the large-scale taxi trip data to uncover urban mobility and city dynamics in multimodal urban transportation environments. as a case study, we use the taxi origin-destination trip data and some additional data sources in the washington dc area. we first study basic characteristics of taxi trips, then focus on five important aspects. three of them concern urban mobility, which are respectively mobility and cost including the effect of traffic congestion, trip safety, and multimodal connectivity; the other two pertain to city dynamics, which are respectively transportation resilience and the relation between trip patterns and land use. for these aspects, we use appropriate statistical methods and geographic techniques to mine patterns and characteristics from taxi trip data, in order to better understand the qualitative and quantitative impacts of the inputs from key stakeholders on available measures of effectiveness for urban mobility and city dynamics, where the key stakeholders include road users, system operators, and the city. finally, we briefly summarize our findings and discuss some critical roles and implications of the uncovered patterns and characteristics for the relation between the taxi system and key stakeholders. the results can support road users by providing evidence-based information on trip cost, mobility, safety, multimodal connectivity and transportation resilience, can assist taxi drivers and operators in delivering transportation services with higher quality of mobility, safety and operational efficiency, and can also help city planners and policy makers to transform multimodal transportation and to manage urban resources more effectively. |
bayesian propagation of record linkage uncertainty into population size estimation of human rights violations | multiple-systems or capture-recapture estimation are common techniques for population size estimation, particularly in the quantitative study of human rights violations. these methods rely on multiple samples from the population, along with the information of which individuals appear in which samples. the goal of record linkage techniques is to identify unique individuals across samples based on the information collected on them. linkage decisions are subject to uncertainty when such information contains errors and missingness, and when different individuals have very similar characteristics. uncertainty in the linkage should be propagated into the stage of population size estimation. we propose an approach called linkage-averaging to propagate linkage uncertainty, as quantified by some bayesian record linkage methodologies, into a subsequent stage of population size estimation. linkage-averaging is a two-stage approach in which the results from the record linkage stage are fed into the population size estimation stage. we show that under some conditions the results of this approach correspond to those of a proper bayesian joint model for both record linkage and population size estimation. the two-stage nature of linkage-averaging allows us to combine different record linkage models with different capture-recapture models, which facilitates model exploration. we present a case study from the salvadoran civil war, where we are interested in estimating the total number of civilian killings using lists of witnesses' reports collected by different organizations. these lists contain duplicates, typographical and spelling errors, missingness, and other inaccuracies that lead to uncertainty in the linkage. we show how linkage-averaging can be used for transferring the uncertainty in the linkage of these lists into different models for population size estimation. |
can vaes generate novel examples? | an implicit goal in works on deep generative models is that such models should be able to generate novel examples that were not previously seen in the training data. in this paper, we investigate to what extent this property holds for widely employed variational autoencoder (vae) architectures. vaes maximize a lower bound on the log marginal likelihood, which implies that they will in principle overfit the training data when provided with a sufficiently expressive decoder. in the limit of an infinite capacity decoder, the optimal generative model is a uniform mixture over the training data. more generally, an optimal decoder should output a weighted average over the examples in the training data, where the magnitude of the weights is determined by the proximity in the latent space. this leads to the hypothesis that, for a sufficiently high capacity encoder and decoder, the vae decoder will perform nearest-neighbor matching according to the coordinates in the latent space. to test this hypothesis, we investigate generalization on the mnist dataset. we consider both generalization to new examples of previously seen classes, and generalization to the classes that were withheld from the training set. in both cases, we find that reconstructions are closely approximated by nearest neighbors for higher-dimensional parameterizations. when generalizing to unseen classes however, lower-dimensional parameterizations offer a clear advantage. |
estimating rationally inattentive utility functions with deep clustering for framing - applications in youtube engagement dynamics | we consider a framework involving behavioral economics and machine learning. rationally inattentive bayesian agents make decisions based on their posterior distribution, utility function and information acquisition cost (renyi divergence, which generalizes shannon mutual information). by observing these decisions, how can an observer estimate the utility function and information acquisition cost? using deep learning, we estimate framing information (essential extrinsic features) that determines the agent's attention strategy. then we present a preference-based inverse reinforcement learning algorithm to test for rational inattention: is the agent a utility maximizer, attention maximizer, and does an information cost function exist that rationalizes the data? the test imposes a renyi mutual information constraint which impacts how the agent can select attention strategies to maximize their expected utility. the test provides constructive estimates of the utility function and information acquisition cost of the agent. we illustrate these methods on a massive youtube dataset for characterizing the commenting behavior of users. |
mixed membership recurrent neural networks | models for sequential data such as the recurrent neural network (rnn) often implicitly model a sequence as having a fixed time interval between observations and do not account for group-level effects when multiple sequences are observed. we propose a model for grouped sequential data based on the rnn that accounts for varying time intervals between observations in a sequence by learning a group-level base parameter to which each sequence can revert. our approach is motivated by the mixed membership framework, and we show how it can be used for dynamic topic modeling in which the distribution on topics (not the topics themselves) is evolving in time. we demonstrate our approach on a dataset of 3.4 million online grocery shopping orders made by 206k customers. |
improving context-aware semantic relationships in sparse mobile datasets | traditional semantic similarity models often fail to encapsulate the external context in which texts are situated. however, textual datasets generated on mobile platforms can help us build a truer representation of semantic similarity by introducing multimodal data. this is especially important for sparse datasets, where solely text-driven interpretation of context is more difficult. in this paper, we develop new algorithms for building external features into sentence embeddings and semantic similarity scores. then, we test them on embedding spaces built from twitter data, using each tweet's time and geolocation to better understand its context. ultimately, we show that applying pca with eight components to the embedding space and appending multimodal features yields the best outcomes. this yields a considerable improvement over pure text-based approaches for discovering similar tweets. our results suggest that our new algorithm can help improve semantic understanding in various settings. |
artificial neural networks condensation: a strategy to facilitate adaptation of machine learning in medical settings by reducing computational burden | machine learning (ml) applications in healthcare can have a great impact on people's lives by helping deliver better and timely treatment to those in need. at the same time, medical data is usually big and sparse, requiring substantial computational resources. although this might not be a problem for wide adoption of ml tools in developed nations, the availability of computational resources can very well be limited in third-world nations. this can prevent the less favored people from benefiting from the advancement in ml applications for healthcare. in this project we explored methods to increase the computational efficiency of ml algorithms, in particular artificial neural nets (nn), while not compromising the accuracy of the predicted results. we used in-hospital mortality prediction as our case analysis, based on the publicly available mimic iii dataset. we explored three methods on two different nn architectures. we reduced the size of a recurrent neural net (rnn) and a dense neural net (dnn) by applying pruning of "unused" neurons. additionally, we modified the rnn structure by adding a hidden layer to the lstm cell, allowing the model to use fewer recurrent layers. finally, we implemented quantization on the dnn, forcing the weights to be 8 bits instead of 32 bits. we found that all our methods increased computational efficiency without compromising accuracy, and some of them even achieved higher accuracy than the pre-condensed baseline models. |
computations in stochastic acceptors | machine learning provides algorithms that can learn from data and make inferences or predictions on data. stochastic acceptors or probabilistic automata are stochastic automata without output that can model components in machine learning scenarios. in this paper, we provide dynamic programming algorithms for the computation of input marginals and the acceptance probabilities in stochastic acceptors. furthermore, we specify an algorithm for the parameter estimation of the conditional probabilities using the expectation-maximization technique and a more efficient implementation related to the baum-welch algorithm. |
a note on the bayesian approach to sliding window detector development | recently a bayesian methodology has been introduced, enabling the construction of sliding window detectors with the constant false alarm rate property. the approach is based on bayesian predictive inference, where, under the assumption of no target, a predictive density of the cell under test, conditioned on the clutter range profile, is produced. the probability of false alarm can then be produced by integrating this density. as a result, for a given clutter model, the bayesian constant false alarm rate detector is obtained. this note outlines how this approach can be extended to allow the construction of alternative bayesian decision rules, based upon more useful measures of the clutter level. |
a multi-objective anytime rule mining system to ease iterative feedback from domain experts | data extracted from software repositories is used intensively in software engineering research, for example, to predict defects in source code. in our research in this area, with data from open source projects as well as an industrial partner, we noticed several shortcomings of conventional data mining approaches for classification problems: (1) domain experts' acceptance is of critical importance, and domain experts can provide valuable input, but it is hard to use this feedback. (2) the evaluation of the model is not a simple matter of calculating auc or accuracy. instead, there are multiple objectives of varying importance, but their importance cannot be easily quantified. furthermore, the performance of the model cannot be evaluated on a per-instance level in our case, because it shares aspects with the set cover problem. to overcome these problems, we take a holistic approach and develop a rule mining system that simplifies iterative feedback from domain experts and can easily incorporate the domain-specific evaluation needs. a central part of the system is a novel multi-objective anytime rule mining algorithm. the algorithm is based on the grasp-pr meta-heuristic but extends it with ideas from several other approaches. we successfully applied the system in the industrial context. in the current article, we focus on the description of the algorithm and the concepts of the system. we provide an implementation of the system for reuse. |
learning when to communicate at scale in multiagent cooperative and competitive tasks | learning when to communicate and doing that effectively is essential in multi-agent tasks. recent works show that continuous communication allows efficient training with back-propagation in multi-agent scenarios, but have been restricted to fully-cooperative tasks. in this paper, we present the individualized controlled continuous communication model (ic3net), which has better training efficiency than the simple continuous communication model and can be applied to semi-cooperative and competitive settings along with cooperative settings. ic3net controls continuous communication with a gating mechanism and uses individualized rewards for each agent to gain better performance and scalability while fixing credit assignment issues. using a variety of tasks, including starcraft: brood war explore and combat scenarios, we show that our network yields better performance and faster convergence rates than the baselines as the scale increases. our results convey that ic3net agents learn when to communicate based on the scenario and profitability. |
a determinantal point process for column subset selection | dimensionality reduction is a first step of many machine learning pipelines. two popular approaches are principal component analysis, which projects onto a small number of well chosen but non-interpretable directions, and feature selection, which selects a small number of the original features. feature selection can be abstracted as a numerical linear algebra problem called the column subset selection problem (cssp). cssp corresponds to selecting the best subset of columns of a matrix $x \in \mathbb{r}^{n \times d}$, where \emph{best} is often meant in the sense of minimizing the approximation error, i.e., the norm of the residual after projection of $x$ onto the space spanned by the selected columns. such an optimization over subsets of $\{1,\dots,d\}$ is usually impractical. one workaround that has been vastly explored is to resort to polynomial-cost, random subset selection algorithms that favor small values of this approximation error. we propose such a randomized algorithm, based on sampling from a projection determinantal point process (dpp), a repulsive distribution over a fixed number $k$ of indices $\{1,\dots,d\}$ that favors diversity among the selected columns. we give bounds on the ratio of the expected approximation error for this dpp over the optimal error of pca. these bounds improve over the state-of-the-art bounds of \emph{volume sampling} when some realistic structural assumptions are satisfied for $x$. numerical experiments suggest that our bounds are tight, and that our algorithms have comparable performance with the \emph{double phase} algorithm, often considered to be the practical state-of-the-art. column subset selection with dpps thus inherits the best of both worlds: good empirical performance and tight error bounds. |
bayesian point set registration | point set registration involves identifying a smooth invertible transformation between corresponding points in two point sets, one of which may be smaller than the other and possibly corrupted by observation noise. this problem is traditionally decomposed into two separate optimization problems: (i) assignment or correspondence, and (ii) identification of the optimal transformation between the ordered point sets. in this work, we propose an approach that solves both problems simultaneously. in particular, a coherent bayesian formulation of the problem results in a marginal posterior distribution on the transformation, which is explored within a markov chain monte carlo scheme. motivated by atomic probe tomography (apt), in the context of structure inference for high entropy alloys (hea), we focus on the registration of noisy sparse observations of rigid transformations of a known reference configuration. lastly, we test our method on synthetic data sets. |
model selection for mixture models - perspectives and strategies | determining the number g of components in a finite mixture distribution is an important and difficult inference issue. this is a most important question, because statistical inference about the resulting model is highly sensitive to the value of g. selecting an erroneous value of g may produce a poor density estimate. this is also a most difficult question from a theoretical perspective as it relates to unidentifiability issues of the mixture model. this is further a most relevant question from a practical viewpoint since the meaning of the number of components g is strongly related to the modelling purpose of a mixture distribution. we distinguish in this chapter between selecting g as a density estimation problem in section 2 and selecting g in a model-based clustering framework in section 3. both sections discuss frequentist as well as bayesian approaches. we present here some of the bayesian solutions to the different interpretations of picking the "right" number of components in a mixture, before concluding on the ill-posed nature of the question. |
group preserving label embedding for multi-label classification | multi-label learning is concerned with the classification of data with multiple class labels. this is in contrast to the traditional classification problem where every data instance has a single label. due to the exponential size of the output space, exploiting intrinsic information in feature and label spaces has been the major thrust of research in recent years, and the use of parametrization and embedding has been the prime focus. researchers have studied several aspects of embedding, which include label embedding, input embedding, dimensionality reduction and feature selection. these approaches differ from one another in their capability to capture other intrinsic properties such as label correlation, local invariance etc. we assume here that the input data form groups and as a result, the label matrix exhibits a sparsity pattern and hence the labels corresponding to objects in the same group have similar sparsity. in this paper, we study the embedding of labels together with the group information with the objective of building an efficient multi-label classifier. we assume the existence of a low-dimensional space onto which the feature vectors and label vectors can be embedded. in order to achieve this, we address three sub-problems, namely: (1) identification of groups of labels; (2) embedding of label vectors to a low-rank space so that the sparsity characteristic of individual groups remains invariant; and (3) determining a linear mapping that embeds the feature vectors onto the same set of points, as in stage 2, in the low-dimensional space. we compare our method with seven well-known algorithms on twelve benchmark data sets. our experimental analysis demonstrates the superiority of our proposed method over state-of-the-art algorithms for multi-label learning. |
self-attention equipped graph convolutions for disease prediction | multi-modal data comprising imaging (mri, fmri, pet, etc.) and non-imaging (clinical test, demographics, etc.) data can be collected together and used for disease prediction. such diverse data gives complementary information about the patient's condition to make an informed diagnosis. a model capable of leveraging the individuality of each multi-modal data is required for better disease prediction. we propose a graph convolution based deep model which takes into account the distinctiveness of each element of the multi-modal data. we incorporate a novel self-attention layer, which weights every element of the demographic data by exploring its relation to the underlying disease. we demonstrate the superiority of our developed technique in terms of computational speed and performance when compared to state-of-the-art methods. our method outperforms other methods by a significant margin. |
overparameterized nonlinear learning: gradient descent takes the shortest path? | many modern learning tasks involve fitting nonlinear models to data which are trained in an overparameterized regime where the parameters of the model exceed the size of the training dataset. due to this overparameterization, the training loss may have infinitely many global minima and it is critical to understand the properties of the solutions found by first-order optimization schemes such as (stochastic) gradient descent starting from different initializations. in this paper we demonstrate that when the loss has certain properties over a minimally small neighborhood of the initial point, first order methods such as (stochastic) gradient descent have a few intriguing properties: (1) the iterates converge at a geometric rate to a global optimum even when the loss is nonconvex, (2) among all global optima of the loss the iterates converge to one with a near minimal distance to the initial point, (3) the iterates take a near direct route from the initial point to this global optimum. as part of our proof technique, we introduce a new potential function which captures the precise tradeoff between the loss function and the distance to the initial point as the iterations progress. for stochastic gradient descent (sgd), we develop novel martingale techniques that guarantee sgd never leaves a small neighborhood of the initialization, even with rather large learning rates. we demonstrate the utility of our general theory for a variety of problem domains spanning low-rank matrix recovery to neural network training. underlying our analysis are novel insights that may have implications for training and generalization of more sophisticated learning problems including those involving deep neural network architectures. |
parallel clustering of single cell transcriptomic data with split-merge sampling on dirichlet process mixtures | motivation: with the development of droplet based systems, massive single cell transcriptome data has become available, which enables analysis of cellular and molecular processes at single cell resolution and is instrumental to understanding many biological processes. while state-of-the-art clustering methods have been applied to the data, they face challenges in the following aspects: (1) the clustering quality still needs to be improved; (2) most models need prior knowledge on number of clusters, which is not always available; (3) there is a demand for faster computational speed. results: we propose to tackle these challenges with parallel split merge sampling on dirichlet process mixture model (the para-dpmm model). unlike classic dpmm methods that perform sampling on each single data point, the split merge mechanism samples on the cluster level, which significantly improves convergence and optimality of the result. the model is highly parallelized and can utilize the computing power of high performance computing (hpc) clusters, enabling massive clustering on huge datasets. experiment results show the model outperforms current widely used models in both clustering quality and computational speed. availability: source code is publicly available on https://github.com/tiehangd/para_dpmm/tree/master/para_dpmm_package |
sequence to sequence learning for query expansion | using sequence to sequence algorithms for query expansion has not yet been explored in the information retrieval literature, nor in that of question answering. we tried to fill this gap in the literature with a custom query expansion engine trained and tested on open datasets. starting from open datasets, we built a query expansion training set using sentence-embeddings-based keyword extraction. we therefore assessed the ability of sequence to sequence neural networks to capture expanding relations in the word embeddings' space. |
mixed-order spectral clustering for networks | clustering is fundamental for gaining insights from complex networks, and spectral clustering (sc) is a popular approach. conventional sc focuses on second-order structures (e.g., edges connecting two nodes) without direct consideration of higher-order structures (e.g., triangles and cliques). this has motivated sc extensions that directly consider higher-order structures. however, both approaches are limited to considering a single order. this paper proposes a new mixed-order spectral clustering (mosc) approach to model both second-order and third-order structures simultaneously, with two mosc methods developed based on graph laplacian (gl) and random walks (rw). mosc-gl combines edge and triangle adjacency matrices, with theoretical performance guarantee. mosc-rw combines first-order and second-order random walks for a probabilistic interpretation. we automatically determine the mixing parameter based on cut criteria or triangle density, and construct new structure-aware error metrics for performance evaluation. experiments on real-world networks show 1) the superior performance of two mosc methods over existing sc methods, 2) the effectiveness of the mixing parameter determination strategy, and 3) insights offered by the structure-aware error metrics. |
an algorithm for computing the t-signature of two-state networks | due to the importance of the signature vector in studying the reliability of networks, some methods have been proposed by researchers to obtain the signature. the notion of signature is used when at most one link may fail at each time instant. it is more realistic to consider the case where none, one, or more than one component of the network may be destroyed at each time instant. motivated by this, the concept of t-signature has been recently defined to get the reliability of such a network. the t-signature is a probability vector and depends only on the network structure. in this paper, we propose an algorithm to compute the t-signature. the performance of the proposed algorithm is evaluated for some networks. |
dropout regularization in hierarchical mixture of experts | dropout is a very effective method in preventing overfitting and has become the go-to regularizer for multi-layer neural networks in recent years. hierarchical mixture of experts is a hierarchically gated model that defines a soft decision tree where leaves correspond to experts and decision nodes correspond to gating models that softly choose between their children, and as such, the model defines a soft hierarchical partitioning of the input space. in this work, we propose a variant of dropout for hierarchical mixture of experts that is faithful to the tree hierarchy defined by the model, as opposed to having a flat, unitwise independent application of dropout as one has with multi-layer perceptrons. we show that on synthetic regression data and on the mnist and cifar-10 datasets, our proposed dropout mechanism prevents overfitting on trees with many levels, improving generalization and providing smoother fits. |
rstap: an r package for spatial temporal aggregated predictor models | the rstap package implements bayesian spatial temporal aggregated predictor models in r using the probabilistic programming language stan. a variety of distributions and link functions are supported, allowing users to fit this extension to the generalized linear model with both independent and correlated outcomes. |
a new concept of deep reinforcement learning based augmented general sequence tagging system | in this paper, a new deep reinforcement learning based augmented general sequence tagging system is proposed. the new system contains two parts: a deep neural network (dnn) based sequence tagging model and a deep reinforcement learning (drl) based augmented tagger. the augmented tagger helps improve system performance by modeling the data with minority tags. the new system is evaluated on slu and nlu sequence tagging tasks using the atis and conll-2003 benchmark datasets, to demonstrate the new system's outstanding performance on general tagging tasks. evaluated by f1 scores, the new system outperforms the current state-of-the-art model on the atis dataset by 1.9% and on the conll-2003 dataset by 1.4%. |
comparing spatial regression to random forests for large environmental data sets | environmental data may be "large" due to number of records, number of covariates, or both. random forests has a reputation for good predictive performance when using many covariates with nonlinear relationships, whereas spatial regression, when using reduced rank methods, has a reputation for good predictive performance when using many records that are spatially autocorrelated. in this study, we compare these two techniques using a data set containing the macroinvertebrate multimetric index (mmi) at 1859 stream sites with over 200 landscape covariates. a primary application is mapping mmi predictions and prediction errors at 1.1 million perennial stream reaches across the conterminous united states. for the spatial regression model, we develop a novel transformation procedure that estimates box-cox transformations to linearize covariate relationships and handles possibly zero-inflated covariates. we find that the spatial regression model with transformations, and a subsequent selection of significant covariates, has cross-validation performance slightly better than random forests. we also find that prediction interval coverage is close to nominal for each method, but that spatial regression prediction intervals tend to be narrower and have less variability than quantile regression forest prediction intervals. a simulation study is used to generalize results and clarify advantages of each modeling approach. |
optimizing market making using multi-agent reinforcement learning | in this paper, reinforcement learning is applied to the problem of optimizing market making. a multi-agent reinforcement learning framework is used to optimally place limit orders that lead to successful trades. the framework consists of two agents. the macro-agent optimizes on making the decision to buy, sell, or hold an asset. the micro-agent optimizes on placing limit orders within the limit order book. for the context of this paper, the proposed framework is applied and studied on the bitcoin cryptocurrency market. the goal of this paper is to show that reinforcement learning is a viable strategy that can be applied to complex problems (with complex environments) such as market making. |
a review on the use of deep learning in android malware detection | android has been the predominant mobile operating system for the past few years. the prevalence of devices powered by android has attracted not merely application developers but also malware developers with criminal intent to design and spread malicious applications that can affect the normal work of android phones and tablets, steal personal information and credential data, or, even worse, lock the phone and ask for ransom. researchers persistently devise countermeasure strategies to fight back against malware. one of these strategies, applied in the past five years, is the use of deep learning methods in android malware detection. this necessitates a review to inspect the accomplished work in order to know where the endeavors have been established, identify unresolved problems, and motivate future research directions. in this work, static analysis, dynamic analysis, and hybrid analysis approaches that utilize deep learning methods are extensively reviewed, with an elaborated discussion on their key concepts, contributions, and limitations. |
the global anchor method for quantifying linguistic shifts and domain adaptation | language is dynamic, constantly evolving and adapting with respect to time, domain or topic. the adaptability of language is an active research area, where researchers discover social, cultural and domain-specific changes in language using distributional tools such as word embeddings. in this paper, we introduce the global anchor method for detecting corpus-level language shifts. we show both theoretically and empirically that the global anchor method is equivalent to the alignment method, a widely-used method for comparing word embeddings, in terms of detecting corpus-level language shifts. despite their equivalence in terms of detection abilities, we demonstrate that the global anchor method is superior in terms of applicability as it can compare embeddings of different dimensionalities. furthermore, the global anchor method has implementation and parallelization advantages. we show that the global anchor method reveals fine structures in the evolution of language and domain adaptation. when combined with the graph laplacian technique, the global anchor method recovers the evolution trajectory and domain clustering of disparate text corpora. |
ecg segmentation by neural networks: errors and correction | in this study we examined the question of how error correction occurs in an ensemble of deep convolutional networks, trained for an important applied problem: segmentation of electrocardiograms (ecg). we also explore the possibility of using the information about ensemble errors to evaluate the quality of the data representation built by the network. this possibility arises from the effect of distillation of outliers, which was demonstrated for the ensemble described in this paper. |
detecting weak and strong islamophobic hate speech on social media | islamophobic hate speech on social media inflicts considerable harm on both targeted individuals and wider society, and also risks reputational damage for the host platforms. accordingly, there is a pressing need for robust tools to detect and classify islamophobic hate speech at scale. previous research has largely approached the detection of islamophobic hate speech on social media as a binary task. however, the varied nature of islamophobia means that this is often inappropriate for both theoretically-informed social science and effectively monitoring social media. drawing on in-depth conceptual work we build a multi-class classifier which distinguishes between non-islamophobic, weak islamophobic and strong islamophobic content. accuracy is 77.6% and balanced accuracy is 83%. we apply the classifier to a dataset of 109,488 tweets produced by far right twitter accounts during 2017. whilst most tweets are not islamophobic, weak islamophobia is considerably more prevalent (36,963 tweets) than strong (14,895 tweets). our main input feature is a glove word embeddings model trained on a newly collected corpus of 140 million tweets. it outperforms a generic word embeddings model by 5.9 percentage points, demonstrating the importance of context. unexpectedly, we also find that a one-against-one multi-class svm outperforms a deep learning algorithm. |
word embedding based on low-rank doubly stochastic matrix decomposition | word embedding, which encodes words into vectors, is an important starting point in natural language processing and commonly used in many text-based machine learning tasks. however, in most current word embedding approaches, the similarity in embedding space is not optimized in the learning. in this paper we propose a novel neighbor embedding method which directly learns an embedding simplex where the similarities between the mapped words are optimal in terms of minimal discrepancy to the input neighborhoods. our method is built upon two-step random walks between words via topics and thus able to better reveal the topics among the words. experiment results indicate that our method, compared with another existing word embedding approach, is more favorable for various queries. |
machine learning and ai research for patient benefit: 20 critical questions on transparency, replicability, ethics and effectiveness | machine learning (ml), artificial intelligence (ai) and other modern statistical methods are providing new opportunities to operationalize previously untapped and rapidly growing sources of data for patient benefit. whilst there is a lot of promising research currently being undertaken, the literature as a whole lacks: transparency; clear reporting to facilitate replicability; exploration for potential ethical concerns; and, clear demonstrations of effectiveness. there are many reasons why these issues exist, but one of the most important, for which we provide a preliminary solution here, is the current lack of ml/ai-specific best practice guidance. although there is no consensus on what best practice looks like in this field, we believe that interdisciplinary groups pursuing research and impact projects in the ml/ai for health domain would benefit from answering a series of questions based on the important issues that exist when undertaking work of this nature. here we present 20 questions that span the entire project life cycle, from inception, data analysis, and model evaluation, to implementation, as a means to facilitate project planning and post-hoc (structured) independent evaluation. by beginning to answer these questions in different settings, we can start to understand what constitutes a good answer, and we expect that the resulting discussion will be central to developing an international consensus framework for transparent, replicable, ethical and effective research in artificial intelligence (ai-tree) for health. |
studying oppressive cityscapes of bangladesh | in a densely populated city like dhaka (bangladesh), a growing number of high-rise buildings is an inevitable reality. however, they pose mental health risks for citizens in terms of detachment from natural light, sky view, greenery, and environmental landscapes. the housing economy and rent structure in different areas may or may not take account of such environmental factors. in this paper, we build a computer vision based pipeline to study factors like sky visibility, greenery in the sidewalks, and dominant colors present in streets from a pedestrian's perspective. we show that people in lower economy classes may suffer from lower sky visibility, whereas people in higher economy classes may suffer from lack of greenery in their environment, both of which could be possibly addressed by implementing rent restructuring schemes. |
machine learning in official statistics | in the first half of 2018, the federal statistical office of germany (destatis) carried out a "proof of concept machine learning" as part of its digital agenda. a major component of this was surveys on the use of machine learning methods in official statistics, which were conducted at selected national and international statistical institutions and among the divisions of destatis. it was of particular interest to find out in which statistical areas and for which tasks machine learning is used and which methods are applied. this paper is intended to make the results of the surveys publicly accessible. |
large multistream data analytics for monitoring and diagnostics in manufacturing systems | the high-dimensionality and volume of large scale multistream data have inhibited significant research progress in developing an integrated monitoring and diagnostics (m&d) approach. this data, also categorized as big data, is becoming common in manufacturing plants. in this paper, we propose an integrated m&d approach for large scale streaming data. we developed a novel monitoring method named adaptive principal component monitoring (apc) which adaptively chooses pcs that are most likely to vary due to the change for early detection. importantly, we integrate a novel diagnostic approach, principal component signal recovery (pcsr), to enable a streamlined spc. this diagnostics approach draws inspiration from compressed sensing and uses adaptive lasso for identifying the sparse change in the process. we theoretically motivate our approaches and evaluate the performance of our integrated m&d method through simulations and case studies. |
structure learning of sparse ggms over multiple access networks | a central machine is interested in estimating the underlying structure of a sparse gaussian graphical model (ggm) from datasets distributed across multiple local machines. the local machines can communicate with the central machine through a wireless multiple access channel. in this paper, we are interested in designing effective strategies where reliable learning is feasible under power and bandwidth limitations. two approaches are proposed: the signs and uncoded methods. in the signs method, the local machines quantize their data into binary vectors and an optimal channel coding scheme is used to reliably send the vectors to the central machine, where the structure is learned from the received data. in the uncoded method, data symbols are scaled and transmitted through the channel. the central machine uses the received noisy symbols to recover the structure. theoretical results show that both methods can recover the structure with high probability for a large enough sample size. experimental results indicate the superiority of the signs method over the uncoded method under several circumstances. |
multimodal deep learning for short-term stock volatility prediction | stock market volatility forecasting is a task relevant to assessing market risk. we investigate the interaction between news and prices for the one-day-ahead volatility prediction using state-of-the-art deep learning approaches. the proposed models are trained either end-to-end or using sentence encoders transferred from other tasks. we evaluate a broad range of stock market sectors, namely consumer staples, energy, utilities, healthcare, and financials. our experimental results show that adding news improves the volatility forecasting as compared to the mainstream models that rely only on price data. in particular, our model outperforms the widely-recognized garch(1,1) model for all sectors in terms of coefficient of determination $r^2$, $mse$ and $mae$, achieving the best performance when training from both news and price data. |
a greedy approach to $\ell_{0,\infty}$ based convolutional sparse coding | sparse coding techniques for image processing traditionally rely on a processing of small overlapping patches separately followed by averaging. this has the disadvantage that the reconstructed image no longer obeys the sparsity prior used in the processing. for this purpose convolutional sparse coding has been introduced, where a shift-invariant dictionary is used and the sparsity of the recovered image is maintained. most such strategies target the $\ell_0$ "norm" or the $\ell_1$ norm of the whole image, which may create an imbalanced sparsity across various regions in the image. in order to face this challenge, the $\ell_{0,\infty}$ "norm" has been proposed as an alternative that "operates locally while thinking globally". the approaches taken for tackling the non-convexity of these optimization problems have been either using a convex relaxation or local pursuit algorithms. in this paper, we present an efficient greedy method for sparse coding and dictionary learning, which is specifically tailored to $\ell_{0,\infty}$, and is based on matching pursuit. we demonstrate the usage of our approach in salt-and-pepper noise removal and image inpainting. a code package which reproduces the experiments presented in this work is available at https://web.eng.tau.ac.il/~raja |
automatic summarization of natural language | automatic summarization of natural language is a current topic in computer science research and industry, studied for decades because of its usefulness across multiple domains. for example, summarization is necessary to create reviews such as this one. research and applications have achieved some success in extractive summarization (where key sentences are curated); however, abstractive summarization (synthesis and re-stating) is a hard problem and generally unsolved in computer science. this literature review contrasts historical progress up through the current state of the art, comparing dimensions such as: extractive vs. abstractive, supervised vs. unsupervised, nlp (natural language processing) vs. knowledge-based, deep learning vs. algorithms, structured vs. unstructured sources, and measurement metrics such as rouge and bleu. multiple dimensions are contrasted since current research uses combinations of approaches as seen in the review matrix. throughout this summary, synthesis and critique is provided. this review concludes with insights for improved abstractive summarization measurement, with surprising implications for detecting understanding and comprehension in general. |
bayesian approach for parameter estimation of continuous-time stochastic volatility models using fourier transform methods | we propose a two-stage procedure for the estimation of the parameters of a fairly general, continuous-time stochastic volatility model. an important ingredient of the proposed method is the cuchiero-teichmann volatility estimator, which is based on fourier transforms and provides a continuous-time estimate of the latent process. this estimate is then used to construct an approximate likelihood for the parameters of interest, whose restrictions are taken into account through prior distributions. the procedure is shown to be highly successful for constructing the posterior distribution of the parameters of a heston model, while limited success is achieved when applied to the highly parametrized exponential ornstein-uhlenbeck model. |
blinkml: efficient maximum likelihood estimation with probabilistic guarantees | the rising volume of datasets has made training machine learning (ml) models a major computational cost in the enterprise. given the iterative nature of model and parameter tuning, many analysts use a small sample of their entire data during their initial stage of analysis to make quick decisions (e.g., what features or hyperparameters to use) and use the entire dataset only in later stages (i.e., when they have converged to a specific model). this sampling, however, is performed in an ad-hoc fashion. most practitioners cannot precisely capture the effect of sampling on the quality of their model, and eventually on their decision-making process during the tuning phase. moreover, without systematic support for sampling operators, many optimizations and reuse opportunities are lost. in this paper, we introduce blinkml, a system for fast, quality-guaranteed ml training. blinkml allows users to make error-computation tradeoffs: instead of training a model on their full data (i.e., full model), blinkml can quickly train an approximate model with quality guarantees using a sample. the quality guarantees ensure that, with high probability, the approximate model makes the same predictions as the full model. blinkml currently supports any ml model that relies on maximum likelihood estimation (mle), which includes generalized linear models (e.g., linear regression, logistic regression, max entropy classifier, poisson regression) as well as ppca (probabilistic principal component analysis). our experiments show that blinkml can speed up the training of large-scale ml tasks by 6.26x-629x while guaranteeing the same predictions, with 95% probability, as the full model. |
deconfounding reinforcement learning in observational settings | we propose a general formulation for addressing reinforcement learning (rl) problems in settings with observational data. that is, we consider the problem of learning good policies solely from historical data in which unobserved factors (confounders) affect both observed actions and rewards. our formulation allows us to extend a representative rl algorithm, the actor-critic method, to its deconfounding variant, with the methodology for this extension being easily applied to other rl algorithms. in addition to this, we develop a new benchmark for evaluating deconfounding rl algorithms by modifying the openai gym environments and the mnist dataset. using this benchmark, we demonstrate that the proposed algorithms are superior to traditional rl methods in confounded environments with observational data. to the best of our knowledge, this is the first time that confounders are taken into consideration for addressing full rl problems with observational data. code is available at https://github.com/causalrl/drl. |
learning dynamic generator model by alternating back-propagation through time | this paper studies the dynamic generator model for spatial-temporal processes such as dynamic textures and action sequences in video data. in this model, each time frame of the video sequence is generated by a generator model, which is a non-linear transformation of a latent state vector, where the non-linear transformation is parametrized by a top-down neural network. the sequence of latent state vectors follows a non-linear auto-regressive model, where the state vector of the next frame is a non-linear transformation of the state vector of the current frame as well as an independent noise vector that provides randomness in the transition. the non-linear transformation of this transition model can be parametrized by a feedforward neural network. we show that this model can be learned by an alternating back-propagation through time algorithm that iteratively samples the noise vectors and updates the parameters in the transition model and the generator model. we show that our training method can learn realistic models for dynamic textures and action patterns. |
bayesian fusion estimation via t-shrinkage | shrinkage priors have gained great success in many data analysis problems; however, their applications mostly focus on the bayesian modeling of sparse parameters. in this work, we apply bayesian shrinkage to model a high dimensional parameter that possesses an unknown blocking structure. we propose to impose a heavy-tail shrinkage prior, e.g., a $t$ prior, on the differences of successive parameter entries, and such a fusion prior will shrink successive differences towards zero and hence induce posterior blocking. compared to the conventional bayesian fused lasso, which implements a laplace fusion prior, the $t$ fusion prior induces a stronger shrinkage effect and enjoys a nice posterior consistency property. simulation studies and real data analyses show that $t$ fusion has superior performance to the frequentist fusion estimator and the bayesian laplace-fusion prior. this $t$-fusion strategy is further developed to conduct a bayesian clustering analysis, and simulation shows that the proposed algorithm obtains better posterior distributional convergence than classical dirichlet process modeling. |
asymptotic distribution of centralized $r$ when sampling from cauchy | assume that $x$ and $y$ are independent random variables, each having a cauchy distribution with a known median. taking a random independent sample of size $n$ of each $x$ and $y$, one can then compute their centralized empirical correlation coefficient $r$. analytically investigating the sampling distribution of this $r$ appears possible only in the large $n$ limit; this is what we have done in this article, deriving several new and interesting results. |
sampling on the sphere from $f(x) \propto x^tax$ | we present a method for drawing random samples of unit vectors $x$ in $\mathbb{r}^p$ with density proportional to $x^t a x$, where $a$ is a symmetric, positive definite matrix. an r function which implements the method is included. |
power comparison between high dimensional t-test, sign, and signed rank tests | in this paper, we propose a power comparison between high dimensional t-test, sign and signed rank test for the one sample mean test. we show that the high dimensional signed rank test is superior to a high dimensional t test, but inferior to a high dimensional sign test. |
sparse nonnegative candecomp/parafac decomposition in block coordinate descent framework: a comparison study | nonnegative candecomp/parafac (ncp) decomposition is an important tool to process nonnegative tensors. sometimes, additional sparse regularization is needed to extract meaningful nonnegative and sparse components. thus, an optimization method for ncp that can impose sparsity efficiently is required. in this paper, we construct ncp with sparse regularization (sparse ncp) using the l1-norm. several popular optimization methods in the block coordinate descent framework are employed to solve the sparse ncp, all of which are analyzed in depth with mathematical solutions. we compare these methods by experiments on synthetic and real tensor data, both of which contain third-order and fourth-order cases. from this comparison, we identify the methods that offer fast computation and impose sparsity effectively. in addition, we propose an accelerated method to compute the objective function and relative error of sparse ncp, which significantly improves the computation of the tensor decomposition, especially for higher-order tensors. |
sampling using neural networks for colorizing the grayscale images | the main idea of this paper is to explore the possibilities of generating samples from neural networks, mostly focusing on the colorization of grayscale images. i compare the existing methods for colorization and explore the possibilities of applying new generative modeling to the colorization task. the contributions of this paper are to compare the existing structures with similar generating structures (decoders) and to apply novel structures including the conditional vae (cvae), conditional wasserstein gan with gradient penalty (cwgan-gp), cwgan-gp with l1 reconstruction loss, adversarial generative encoders (age) and introspective vae (ivae). i trained these models using cifar-10 images. to measure performance, i use the inception score (is), which measures how distinctive each image is and how diverse the overall samples are, as well as human evaluation of the cifar-10 images. it turns out that cvae with l1 reconstruction loss and ivae achieve the highest is. cwgan-gp with l1 tends to learn faster than cwgan-gp, but its is does not improve over cwgan-gp. cwgan-gp tends to generate more diverse images than other models that use a reconstruction loss. i also find that proper regularization plays a vital role in generative modeling.
robustness to out-of-distribution inputs via task-aware generative uncertainty | deep learning provides a powerful tool for machine perception when the observations resemble the training data. however, real-world robotic systems must react intelligently to their observations even in unexpected circumstances. this requires a system to reason about its own uncertainty given unfamiliar, out-of-distribution observations. approximate bayesian approaches are commonly used to estimate uncertainty for neural network predictions, but can struggle with out-of-distribution observations. generative models can in principle detect out-of-distribution observations as those with a low estimated density. however, the mere presence of an out-of-distribution input does not by itself indicate an unsafe situation. in this paper, we present a method for uncertainty-aware robotic perception that combines generative modeling and model uncertainty to cope with uncertainty stemming from out-of-distribution states. our method estimates an uncertainty measure about the model's prediction, taking into account an explicit (generative) model of the observation distribution to handle out-of-distribution inputs. this is accomplished by probabilistically projecting observations onto the training distribution, such that out-of-distribution inputs map to uncertain in-distribution observations, which in turn produce uncertain task-related predictions, but only if task-relevant parts of the image change. we evaluate our method on an action-conditioned collision prediction task with both simulated and real data, and demonstrate that our method of projecting out-of-distribution observations improves the performance of four standard bayesian and non-bayesian neural network approaches, offering more favorable trade-offs between the proportion of time a robot can remain autonomous and the proportion of impending crashes successfully avoided. |
neuromemristive architecture of htm with on-device learning and neurogenesis | hierarchical temporal memory (htm) is a biomimetic sequence memory algorithm that holds promise for invariant representations of spatial and spatiotemporal inputs. this paper presents a comprehensive neuromemristive crossbar architecture for the spatial pooler (sp) and the sparse distributed representation classifier, which are fundamental to the algorithm. there are several unique features in the proposed architecture that tightly link with the htm algorithm. a memristor that is suitable for emulating the htm synapses is identified and a new z-window function is proposed. the architecture exploits the concept of synthetic synapses to enable potential synapses in the htm. the crossbar for the sp avoids dark spots caused by unutilized crossbar regions and supports rapid on-chip training within 2 clock cycles. this research also leverages plasticity mechanisms such as neurogenesis and homeostatic intrinsic plasticity to strengthen the robustness and performance of the sp. the proposed design is benchmarked for image recognition tasks using the mnist and yale faces datasets, and is evaluated using different metrics including entropy, sparseness, and noise robustness. detailed power analysis at different stages of the sp operations is performed to demonstrate the suitability for mobile platforms.
on mutual information estimation for mixed-pair random variables | we study mutual information estimation for mixed-pair random variables, where one random variable is discrete and the other is continuous. we develop a kernel method to estimate the mutual information between the two random variables. the estimates enjoy a central limit theorem under some regularity conditions on the distributions. the theoretical results are demonstrated by a simulation study.
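a generic plug-in estimator in the spirit of this abstract averages $\log f(y|x) - \log f(y)$ over the sample, with kernel density estimates of both densities; the paper's exact estimator and the conditions for its central limit theorem may differ. a minimal sketch:

```python
import numpy as np
from scipy.stats import gaussian_kde

def mixed_pair_mi(x_discrete, y_continuous):
    # plug-in estimate of I(X;Y) = sum_x p(x) E[ log f(Y|x) - log f(Y) ]
    # for discrete x and continuous y, using gaussian kernel density estimates
    x = np.asarray(x_discrete)
    y = np.asarray(y_continuous)
    f_y = gaussian_kde(y)
    mi = 0.0
    for level in np.unique(x):
        mask = x == level
        f_y_given_x = gaussian_kde(y[mask])
        mi += mask.mean() * np.mean(np.log(f_y_given_x(y[mask]) / f_y(y[mask])))
    return mi

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 2000)
y = rng.normal(loc=2.0 * x, scale=1.0)
# for a binary x the true mutual information is at most log(2) nats
print(mixed_pair_mi(x, y))
```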
asymptotic comparison of two-stage selection procedures under quasi-bayesian framework | this paper revisits the procedures suggested by dudewicz and dalal (1975) and rinott (1978), which are designed for selecting the population with the highest mean among independent gaussian populations with unknown and possibly different variances. in a previous paper, jacobovic and zuk (2017) conjectured that the relative asymptotic efficiency of these procedures equals the ratio of two certain sequences. this work suggests a quasi-bayesian modelling of the problem under which this conjecture is valid. in addition, this paper motivates an open question regarding the extreme value distribution of the maxima of a triangular array of independent student-t random variables with an increasing number of degrees of freedom.
off-the-grid model based deep learning (o-modl) | we introduce a model-based off-the-grid image reconstruction algorithm using deep learned priors. the main difference of the proposed scheme from current deep learning strategies is the learning of non-linear annihilation relations in fourier space. we rely on a model-based framework, which allows us to use a significantly smaller deep network compared to direct approaches that also learn how to invert the forward model. preliminary comparisons against the image-domain modl approach demonstrate the potential of the off-the-grid formulation. the main benefit of the proposed scheme compared to structured low-rank methods is a quite significant reduction in computational complexity.
semiparametric estimation for the transformation model with length-biased data and covariate measurement error | analysis of survival data with biased samples caused by left-truncation or length-biased sampling has received extensive interest. many inference methods have been developed for various survival models. these methods, however, break down when the survival data are error-contaminated. although error-prone survival data commonly arise in practice, little work is available in the literature on handling length-biased data with measurement error. in survival analysis, the transformation model is one of the frequently used models. however, methods of analyzing the transformation model with these complex features have not been fully explored. in this paper, we study this important problem and develop a valid inference method under the transformation model. we establish asymptotic results for the proposed estimators. the proposed method enjoys appealing features in that there is no need to specify the distribution of the covariates or the increasing function in the transformation model. numerical studies are reported to assess the performance of the proposed method.
evaluating generative adversarial networks on explicitly parameterized distributions | the true distribution parameterizations of commonly used image datasets are inaccessible. rather than designing metrics for feature spaces with unknown characteristics, we propose to measure gan performance by evaluating on explicitly parameterized, synthetic data distributions. as a case study, we examine the performance of 16 gan variants on six multivariate distributions of varying dimensionalities and training set sizes. in this learning environment, we observe that: gans exhibit similar performance trends across dimensionalities; learning depends on the underlying distribution and its complexity; the number of training samples can have a large impact on performance; evaluation and relative comparisons are metric-dependent; diverse sets of hyperparameters can produce a "best" result; and some gans are more robust to hyperparameter changes than others. these observations both corroborate findings of previous gan evaluation studies and make novel contributions regarding the relationship between size, complexity, and gan performance. |
topological constraints on homeomorphic auto-encoding | when doing representation learning on data that lives on a known non-trivial manifold embedded in high dimensional space, it is natural to desire the encoder to be homeomorphic when restricted to the manifold, so that it is bijective and continuous with a continuous inverse. using topological arguments, we show that when the manifold is non-trivial, the encoder must be globally discontinuous and propose a universal, albeit impractical, construction. in addition, we derive necessary constraints which need to be satisfied when designing manifold-specific practical encoders. these are used to analyse candidates for a homeomorphic encoder for the manifold of 3d rotations $so(3)$. |
practical considerations for data collection and management in mobile health micro-randomized trials | there is a growing interest in leveraging the prevalence of mobile technology to improve health by delivering momentary, contextualized interventions to individuals' smartphones. a just-in-time adaptive intervention (jitai) adjusts to an individual's changing state and/or context to provide the right treatment, at the right time, in the right place. micro-randomized trials (mrts) allow for the collection of data which aid in the construction of an optimized jitai by sequentially randomizing participants to different treatment options at each of many decision points throughout the study. often, this data is collected passively using a mobile phone. to assess the causal effect of treatment on a near-term outcome, care must be taken when designing the data collection system to ensure it is of appropriately high quality. here, we make several recommendations for collecting and managing data from an mrt. we provide advice on selecting which features to collect and when, choosing between "agents" to implement randomization, identifying sources of missing data, and overcoming other novel challenges. the recommendations are informed by our experience with heartsteps, an mrt designed to test the effects of an intervention aimed at increasing physical activity in sedentary adults. we also provide a checklist which can be used in designing a data collection system so that scientists can focus more on their questions of interest, and less on cleaning data. |
classification of radiology reports by modality and anatomy: a comparative study | data labeling is currently a time-consuming task that often requires expert knowledge. in research settings, the availability of correctly labeled data is crucial to ensure that model predictions are accurate and useful. we propose relatively simple machine learning-based models that achieve high performance metrics in the binary and multiclass classification of radiology reports. we compare the performance of these algorithms to that of a data-driven approach based on nlp, and find that the logistic regression classifier outperforms all other models, in both the binary and multiclass classification tasks. we then choose the logistic regression binary classifier to predict chest x-ray (cxr)/ non-chest x-ray (non-cxr) labels in reports from different datasets, unseen during any training phase of any of the models. even in unseen report collections, the binary logistic regression classifier achieves average precision values of above 0.9. based on the regression coefficient values, we also identify frequent tokens in cxr and non-cxr reports that are features with possibly high predictive power. |
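as a hedged illustration of the binary cxr/non-cxr classifier described above, a scikit-learn pipeline might look like the sketch below; the reports and labels are invented toy examples, and tf-idf features are an assumption rather than the paper's feature set.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# hypothetical toy reports; real training data would be de-identified report text
reports = [
    "pa and lateral views of the chest show no acute disease",
    "ct abdomen and pelvis with contrast, no acute findings",
    "frontal chest radiograph demonstrates clear lungs",
    "mri brain without contrast shows no acute infarct",
]
labels = [1, 0, 1, 0]   # 1 = cxr, 0 = non-cxr

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),          # word and bigram tf-idf features
    LogisticRegression(max_iter=1000),            # linear classifier, as in the abstract
)
clf.fit(reports, labels)
print(clf.predict(["single frontal view of the chest"]))   # predicted label for a new report
```

inspecting the fitted coefficients (via the vectorizer vocabulary) is what allows the kind of token-level interpretation mentioned at the end of the abstract.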
hypergraph clustering: a modularity maximization approach | clustering on hypergraphs has been garnering increased attention with potential applications in network analysis, vlsi design and computer vision, among others. in this work, we generalize the framework of modularity maximization for clustering on hypergraphs. to this end, we introduce a hypergraph null model, analogous to the configuration model on undirected graphs, and a node-degree preserving reduction to work with this model. this is used to define a modularity function that can be maximized using the popular and fast louvain algorithm. we additionally propose a refinement over this clustering, by reweighting cut hyperedges in an iterative fashion. the efficacy and efficiency of our methods are demonstrated on several real-world datasets. |
a framework for automated pop-song melody generation with piano accompaniment arrangement | we contribute a pop-song automation framework for lead melody generation and accompaniment arrangement. the framework reflects the major procedures of human music composition, generating both lead melody and piano accompaniment by a unified strategy. specifically, we take chord progression as an input and propose three models to generate a structured melody with piano accompaniment textures. first, the harmony alternation model transforms a raw input chord progression to an altered one to better fit the specified music style. second, the melody generation model generates the lead melody and other voices (melody lines) of the accompaniment using seasonal arma (autoregressive moving average) processes. third, the melody integration model integrates melody lines (voices) together as the final piano accompaniment. we evaluate the proposed framework using subjective listening tests. experimental results show that the generated melodies are rated significantly higher than the ones generated by bi-directional lstm, and our accompaniment arrangement result is comparable with a state-of-the-art commercial software, band in a box. |
rerandomization in $2^k$ factorial experiments | with many pretreatment covariates and treatment factors, the classical factorial experiment often fails to balance covariates across multiple factorial effects simultaneously. therefore, it is intuitive to restrict the randomization of the treatment factors to satisfy certain covariate balance criteria, possibly conforming to the tiers of factorial effects and covariates based on their relative importances. this is rerandomization in factorial experiments. we study the asymptotic properties of this experimental design under the randomization inference framework without imposing any distributional or modeling assumptions of the covariates and outcomes. we derive the joint asymptotic sampling distribution of the usual estimators of the factorial effects, and show that it is symmetric, unimodal, and more "concentrated" at the true factorial effects under rerandomization than under the classical factorial experiment. we quantify this advantage of rerandomization using the notions of "central convex unimodality" and "peakedness" of the joint asymptotic sampling distribution. we also construct conservative large-sample confidence sets for the factorial effects. |
improving the interpretability of deep neural networks with knowledge distillation | deep neural networks have achieved huge success across a wide spectrum of applications, from language modeling and computer vision to speech recognition. however, good performance alone is no longer sufficient for practical deployment, where interpretability is demanded in cases involving ethics and mission-critical applications. the complexity of deep neural networks makes it hard to understand and reason about their predictions, which hinders further progress. to tackle this problem, we apply the knowledge distillation technique to distill deep neural networks into decision trees in order to attain good performance and interpretability simultaneously. we formulate the problem at hand as a multi-output regression problem, and the experiments demonstrate that the student model achieves significantly better accuracy (about 1\% to 5\%) than vanilla decision trees at the same tree depth. the experiments are implemented on the tensorflow platform to make the approach scalable to big datasets. to the best of our knowledge, we are the first to distill deep neural networks into vanilla decision trees on multi-class datasets.
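a minimal sketch of the distillation-as-multi-output-regression idea follows, using scikit-learn on a toy dataset rather than the tensorflow setup described above; the architecture and depth are illustrative choices.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeRegressor

x, y = load_digits(return_X_y=True)

# teacher: a small neural network trained on hard labels
teacher = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0).fit(x, y)
soft_targets = teacher.predict_proba(x)          # class probabilities, one column per class

# student: a decision tree trained as a multi-output regressor on the soft targets
student = DecisionTreeRegressor(max_depth=8, random_state=0).fit(x, soft_targets)
student_pred = student.predict(x).argmax(axis=1)
print("agreement with teacher:", np.mean(student_pred == teacher.predict(x)))
```

regressing on the teacher's probabilities, rather than fitting the tree to the hard labels directly, is what transfers the "dark knowledge" that gives the student its accuracy edge over a vanilla tree of the same depth.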
a variational topological neural model for cascade-based diffusion in networks | many works have been proposed in the literature to capture the dynamics of diffusion in networks. while some of them define graphical markovian models to extract temporal relationships between node infections in networks, others consider diffusion episodes as sequences of infections via recurrent neural models. in this paper we propose a model at the crossroads of these two extremes, which embeds the history of diffusion in infected nodes as hidden continuous states. depending on the trajectory followed by the content before reaching a given node, the distribution of influence probabilities may vary. however, content trajectories are usually hidden in the data, which induces challenging learning problems. we propose a topological recurrent neural model which exhibits good experimental performance for diffusion modelling and prediction.
top-gan: label-free cancer cell classification using deep learning with a small training set | we propose a new deep learning approach for medical imaging that copes with the problem of a small training set, the main bottleneck of deep learning, and apply it for classification of healthy and cancer cells acquired by quantitative phase imaging. the proposed method, called transferring of pre-trained generative adversarial network (top-gan), is a hybridization between transfer learning and generative adversarial networks (gans). healthy cells and cancer cells of different metastatic potential have been imaged by low-coherence off-axis holography. after the acquisition, the optical path delay maps of the cells have been extracted and directly used as an input to the deep networks. in order to cope with the small number of classified images, we have used gans to train a large number of unclassified images from another cell type (sperm cells). after this preliminary training, and after replacing the last layers of the network with new ones, we designed an automatic classifier for the correct cell type (healthy/primary cancer/metastatic cancer) with 90-99% accuracy, even though small training sets of down to several images were used. these results are better than those of other classical methods that address the same small-training-set problem. we believe that our approach makes the combination of holographic microscopy and deep learning networks more accessible to the medical field by enabling a rapid, automatic and accurate classification in stain-free imaging flow cytometry. furthermore, our approach is expected to be applicable to many other medical image classification tasks suffering from a small training set.
hybrid wasserstein distance and fast distribution clustering | we define a modified wasserstein distance for distribution clustering which inherits many of the properties of the wasserstein distance but which can be estimated easily and computed quickly. the modified distance is the sum of two terms. the first term --- which has a closed form --- measures the location-scale differences between the distributions. the second term is an approximation that measures the remaining distance after accounting for location-scale differences. we consider several forms of approximation with our main emphasis being a tangent space approximation that can be estimated using nonparametric regression. we evaluate the strengths and weaknesses of this approach on simulated and real examples. |
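to make the two-term structure concrete, here is a rough sketch in one dimension: the first term is the closed-form wasserstein-2 distance between gaussians matched to the samples' means and standard deviations, and the second term is a crude stand-in for the residual (the paper instead uses a tangent-space approximation estimated by nonparametric regression, which is not reproduced here).

```python
import numpy as np
from scipy.stats import wasserstein_distance

def hybrid_distance(x, y):
    # term 1 (closed form): exact 1-d wasserstein-2 distance between gaussians
    # with the two samples' means and standard deviations
    mx, my, sx, sy = x.mean(), y.mean(), x.std(), y.std()
    loc_scale = np.sqrt((mx - my) ** 2 + (sx - sy) ** 2)
    # term 2: remaining distance after removing location-scale differences,
    # approximated very roughly by the 1-d distance between standardized samples
    residual = wasserstein_distance((x - mx) / sx, (y - my) / sy)
    return loc_scale + residual

rng = np.random.default_rng(0)
print(hybrid_distance(rng.normal(0.0, 1.0, 5000), rng.gamma(2.0, 2.0, 5000)))
```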
vector field-based simulation of tree-like non-stationary geostatistical models | in this work, a new non-stationary multiple point geostatistical algorithm called vector field-based simulation is proposed. the motivation behind this work is the modeling of certain structures that exhibit directional features with branching, like a tree, as can be frequently found in fan deltas or turbidity channels. from an image construction approach, the main idea of this work is that instead of using the training image as a source of patterns, it may be used to create a new object called a training vector field (tvf). this object assigns a vector to each point in the reservoir within the training image. the vector represents the direction in which the reservoir develops. the tvf is defined as an approximation of the tangent line at each point in the contour curve of the reservoir. this vector field has great potential to better capture the non-stationary nature of the training image, since the vector not only gives information about the point where it was defined but also naturally captures the local trend near that point.
deep ptych: subsampled fourier ptychography using generative priors | this paper proposes a novel framework to regularize the highly ill-posed and non-linear fourier ptychography problem using generative models. we demonstrate experimentally that our proposed algorithm, deep ptych, outperforms the existing fourier ptychography techniques, in terms of quality of reconstruction and robustness against noise, using far fewer samples. we further modify the proposed approach to allow the generative model to explore solutions outside the range, leading to improved performance. |
multi-resolution neural networks for tracking seismic horizons from few training images | detecting a specific horizon in seismic images is a valuable tool for geological interpretation. because hand-picking the locations of the horizon is a time-consuming process, automated computational methods were developed starting three decades ago. older techniques for such picking include interpolation between control points; in recent years, however, neural networks have been used for this task. until now, most networks were trained on small patches from larger images. this limits the networks' ability to learn from large-scale geologic structures. moreover, currently available networks and training strategies require label patches that have full and continuous annotations, which are also time-consuming to generate. we propose a projected loss-function for training convolutional networks with a multi-resolution structure, including variants of the u-net. our networks learn from a small number of large seismic images without creating patches. the projected loss-function enables training on labels with just a few annotated pixels and has no issue with the other, unknown label pixels. training uses all data without reserving some for validation; only the labels are split into training/testing. contrary to other work on horizon tracking, we train the network to perform non-linear regression, and not classification. as such, we propose labels formed as the convolution of a gaussian kernel with the known horizon locations, which indicates uncertainty in the labels. the network output is the probability of the horizon location. we demonstrate the proposed computational ingredients on two different datasets, for horizon extrapolation and interpolation. we show that the predictions of our methodology are accurate even in areas far from known horizon locations because our learning strategy exploits all data in large seismic images.
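the gaussian-smoothed labels described above can be sketched in a few lines; the kernel width and normalization below are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def horizon_soft_labels(shape, picks, sigma=2.0):
    # picks: list of (row, col) locations where the horizon was annotated.
    # the label image is the sparse annotation mask blurred with a gaussian
    # kernel, so the network can be trained by regression toward a soft map
    # even when only a few pixels are annotated
    label = np.zeros(shape, dtype=float)
    for r, c in picks:
        label[r, c] = 1.0
    label = gaussian_filter(label, sigma=sigma)
    return label / label.max()

labels = horizon_soft_labels((128, 256), picks=[(40, 10), (42, 60), (45, 120)])
print(labels.shape, labels.max())
```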
scalable gam using sparse variational gaussian processes | generalized additive models (gams) are a widely used class of models of interest to statisticians as they provide a flexible way to design interpretable models of data beyond linear models. we here propose a scalable and well-calibrated bayesian treatment of gams using gaussian processes (gps) and leveraging recent advances in variational inference. we use sparse gps to represent each component and exploit the additive structure of the model to efficiently represent a gaussian a posteriori coupling between the components. |
application of robust estimators in shewhart s-charts | maintaining the quality of manufactured products at a desired level is known to increase customer satisfaction and profitability. the shewhart control chart is the most widely used statistical process control (spc) technique to monitor product quality and control process variability. under the assumption of independent and normally distributed data, the sample mean and standard deviation are known to be the most efficient conventional estimators of the process location and scale, respectively. however, there is no guarantee that real-world process data are normally distributed: outliers may exist, and/or the sampled population may be contaminated. in such cases, the efficiency of the conventional estimators is significantly reduced and the power of the shewhart charts may be undesirably low; for example, occasional outliers in the rational subgroups (phase i dataset) may drastically affect the sample mean and standard deviation, resulting in a serious delay in the detection of inferior products (phase ii procedure). for more efficient analyses, estimators that are robust against contamination are required. we find that robust estimators are more efficient against both diffuse and localized, symmetric and asymmetric contaminations, and have higher power in detecting disturbances than conventional methods.
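the core issue, a single gross outlier in a phase i subgroup inflating the conventional scale estimate, can be seen in a short sketch; the mad used below is just one example of a robust scale estimator and not necessarily among those studied in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def mad_scale(x):
    # median absolute deviation, rescaled to be consistent for the standard
    # deviation under normality
    return 1.4826 * np.median(np.abs(x - np.median(x)))

# a rational subgroup of in-control observations plus one gross outlier
subgroup = rng.normal(loc=10.0, scale=1.0, size=25)
contaminated = np.append(subgroup, 25.0)

print("sd  (clean / contaminated):", subgroup.std(ddof=1), contaminated.std(ddof=1))
print("mad (clean / contaminated):", mad_scale(subgroup), mad_scale(contaminated))
```

the conventional standard deviation roughly triples after contamination, which would widen the s-chart limits and delay out-of-control signals, while the mad barely moves.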
consistency of interpolation with laplace kernels is a high-dimensional phenomenon | we show that minimum-norm interpolation in the reproducing kernel hilbert space corresponding to the laplace kernel is not consistent if input dimension is constant. the lower bound holds for any choice of kernel bandwidth, even if selected based on data. the result supports the empirical observation that minimum-norm interpolation (that is, exact fit to training data) in rkhs generalizes well for some high-dimensional datasets, but not for low-dimensional ones. |
drug cell line interaction prediction | understanding the phenotypic drug response of cancer cell lines plays a vital role in anti-cancer drug discovery and re-purposing. the genomics of drug sensitivity in cancer (gdsc) database provides open data for researchers in phenotypic screening to test their models and methods. previously, most research in these areas started from the fingerprints or features of drugs, instead of their structures. in this paper, we introduce a model for phenotypic screening called the twin convolutional neural network for drugs in smiles format (tcnns). tcnns comprises cnn input channels for drugs in smiles format and for cancer cell lines, respectively. our model achieves $0.84$ for the coefficient of determination ($r^2$) and $0.92$ for the pearson correlation ($r_p$), which are significantly better than previous works \cite{ammad2014integrative,haider2015copula,menden2013machine}. beyond these statistical metrics, tcnns also provides some insights into phenotypic screening.
machine learning enables polymer cloud-point engineering via inverse design | inverse design is an outstanding challenge in disordered systems with multiple length scales such as polymers, particularly when designing polymers with desired phase behavior. we demonstrate high-accuracy tuning of poly(2-oxazoline) cloud point via machine learning. with a design space of four repeating units and a range of molecular masses, we achieve an accuracy of 4 {\deg}c root mean squared error (rmse) in a temperature range of 24-90 {\deg}c, employing gradient boosting with decision trees. the rmse is >3x better than linear and polynomial regression. we perform inverse design via particle-swarm optimization, predicting and synthesizing 17 polymers with constrained design at 4 target cloud points from 37 to 80 {\deg}c. our approach challenges the status quo in polymer design with a machine learning algorithm that is capable of fast and systematic discovery of new polymers.
autoencoder based residual deep networks for robust regression prediction and spatiotemporal estimation | to generalize well, a deep neural network often requires a large training sample. moreover, as hidden layers are added to increase learning capacity, a neural network may suffer a degradation in accuracy. both issues can seriously limit the applicability of deep learning in domains that involve predicting continuous variables from small samples. inspired by residual convolutional neural networks in computer vision and recent neuroscience findings on crucial shortcut connections in the brain, we propose an autoencoder-based residual deep network for robust prediction. in a nested way, we leverage shortcut connections to implement residual mapping with a balanced structure for efficient propagation of error signals. the proposed method is demonstrated on multiple datasets, on the imputation of high spatiotemporal resolution, non-randomly missing aerosol optical depth values, and on the spatiotemporal estimation of fine particulate matter (<2.5 \mu m), achieving state-of-the-art accuracy and efficiency. our approach is also a general-purpose regression learner applicable in diverse domains.
non-asymptotic chernoff lower bound and its application to community detection in stochastic block model | the chernoff coefficient is an upper bound on the bayes error probability in classification problems. in this paper, we develop a sharp chernoff-type bound on the bayes error probability. the new bound is not only an upper bound but also a lower bound on the bayes error probability, up to a constant, in a non-asymptotic setting. moreover, we apply this result to community detection in the stochastic block model. as a clustering problem, the optimal error rate of community detection can be characterized by our chernoff-type bound. this is formalized by deriving a minimax error rate over a certain class of parameter spaces and then achieving this rate with a feasible algorithm that employs multiple steps of em-type updates.
monocular 3d pose recovery via nonconvex sparsity with theoretical analysis | for recovering 3d object poses from 2d images, a prevalent method is to pre-train an over-complete dictionary $\mathcal d=\{b_i\}_{i=1}^d$ of 3d basis poses. during testing, the detected 2d pose $y$ is matched to the dictionary by $y \approx \sum_i m_i b_i$, where $\{m_i\}_{i=1}^d=\{c_i \pi r_i\}$, by estimating the rotation $r_i$, projection $\pi$ and sparse combination coefficients $c \in \mathbb r_{+}^d$. in this paper, we propose non-convex regularization $h(c)$ to learn the coefficients $c$, including the novel leaky capped $\ell_1$-norm regularization (lcnr), \begin{align*} h(c)=\alpha \sum_{i} \min(|c_i|,\tau)+ \beta \sum_{i} \max(|c_i|,\tau), \end{align*} where $0\leq \beta \leq \alpha$ and $0<\tau$ is a certain threshold, so that the invalid components smaller than $\tau$ receive larger regularization and the other, valid components receive smaller regularization. we propose a multi-stage optimizer with convex relaxation and admm. we prove that the estimation error $\mathcal l(l)$ decays with the stage $l$, \begin{align*} pr\left(\mathcal l(l) < \rho^{l-1} \mathcal l(0) + \delta \right) \geq 1- \epsilon, \end{align*} where $0< \rho <1, 0<\delta, 0<\epsilon \ll 1$. experiments on large 3d human datasets like h36m are conducted to support our improvement upon previous approaches. to the best of our knowledge, this is the first theoretical analysis in this line of research to understand how the recovery error is affected by fundamental factors, e.g. dictionary size, observation noise, and optimization time. we characterize the trade-off between speed and accuracy towards real-time inference in applications.
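the lcnr penalty above is simple enough to evaluate directly; a minimal sketch with illustrative values of $\alpha$, $\beta$ and $\tau$ (not the paper's tuned hyperparameters):

```python
import numpy as np

def lcnr(c, alpha=1.0, beta=0.1, tau=0.05):
    # leaky capped l1-norm regularization:
    #   h(c) = alpha * sum_i min(|c_i|, tau) + beta * sum_i max(|c_i|, tau)
    # components with |c_i| < tau feel the larger slope alpha (strong shrinkage),
    # components with |c_i| >= tau feel the smaller slope beta (0 <= beta <= alpha)
    a = np.abs(c)
    return alpha * np.minimum(a, tau).sum() + beta * np.maximum(a, tau).sum()

c = np.array([0.001, 0.002, 0.3, 0.0, 0.7])   # two near-zero, two clearly active coefficients
print(lcnr(c))
```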
advanced methodology for uncertainty propagation in computer experiments with large number of inputs | in the framework of the estimation of safety margins in nuclear accident analysis, a quantitative assessment of the uncertainties tainting the results of computer simulations is essential. accurate uncertainty propagation (estimation of high probabilities or quantiles) and quantitative sensitivity analysis may call for several thousand code simulations. complex computer codes, such as the ones used in thermal-hydraulic accident scenario simulations, are often too cpu-time expensive to be directly used to perform these studies. a solution consists in replacing the computer model by a cpu-inexpensive mathematical function, called a metamodel, built from a reduced number of code simulations. however, in the case of high dimensional experiments (with typically several tens of inputs), the metamodel building process remains difficult. to face this limitation, we propose a methodology which combines several advanced statistical tools: an initial space-filling design, screening to identify the non-influential inputs, and gaussian process (gp) metamodel building with the group of influential inputs as explanatory variables. the residual effect of the group of non-influential inputs is captured by another gp metamodel. then, the resulting joint gp metamodel is used to accurately estimate sobol' sensitivity indices and high quantiles (here the $95\%$-quantile). the efficiency of the methodology to deal with a large number of inputs and reduce the calculation budget is illustrated on a thermal-hydraulic calculation case simulating, with the cathare2 code, a loss of coolant accident scenario in a pressurized water reactor. a predictive gp metamodel is built with only a few hundred code simulations and allows the calculation of the sobol' sensitivity indices. this gp also provides a more accurate estimation of the 95%-quantile and associated confidence interval than the empirical approach, at equal calculation budget. moreover, on this test case, the joint gp approach outperforms the simple gp.
on the construction of knockoffs in case-control studies | consider a case-control study in which we have a random sample, constructed in such a way that the proportion of cases in our sample is different from that in the general population---for instance, the sample is constructed to achieve a fixed ratio of cases to controls. imagine that we wish to determine which of the potentially many covariates under study truly influence the response by applying the new model-x knockoffs approach. this paper demonstrates that it suffices to design knockoff variables using data that may have a different ratio of cases to controls. for example, the knockoff variables can be constructed using the distribution of the original variables under any of the following scenarios: (1) a population of controls only; (2) a population of cases only; (3) a population of cases and controls mixed in an arbitrary proportion (irrespective of the fraction of cases in the sample at hand). the consequence is that knockoff variables may be constructed using unlabeled data, which is often available more easily than labeled data, while maintaining type-i error guarantees. |
brain mri super-resolution using 3d generative adversarial networks | in this work we propose an adversarial learning approach to generate high resolution mri scans from low resolution images. the architecture, based on the srgan model, adopts 3d convolutions to exploit volumetric information. for the discriminator, the adversarial loss uses least squares in order to stabilize the training. for the generator, the loss function is a combination of a least squares adversarial loss and a content term based on mean square error and image gradients in order to improve the quality of the generated images. we explore different solutions for the upsampling phase. we present promising results that improve on classical interpolation, showing the potential of the approach for 3d medical imaging super-resolution. source code available at https://github.com/imatge-upc/3d-gan-superresolution
multivariate arrival times with recurrent neural networks for personalized demand forecasting | access to a large variety of data across a massive population has made it possible to predict customer purchase patterns and responses to marketing campaigns. in particular, accurate demand forecasts for popular products with frequent repeat purchases are essential since these products are one of the main drivers of profits. however, buyer purchase patterns are extremely diverse and sparse on a per-product level due to population heterogeneity as well as dependence in purchase patterns across product categories. traditional methods in survival analysis have proven effective in dealing with censored data by assuming parametric distributions on inter-arrival times. distributional parameters are then fitted, typically in a regression framework. on the other hand, neural-network based models take a non-parametric approach to learn relations from a larger functional class. however, the lack of distributional assumptions makes it difficult to model partially observed data. in this paper, we directly model the inter-arrival times as well as the partially observed information at each time step in a survival-based approach, using recurrent neural networks (rnn) to model purchase times jointly over several products. instead of predicting a point estimate for inter-arrival times, the rnn outputs parameters that define a distributional estimate. the loss function is the negative log-likelihood of these parameters given partially observed data. this approach allows one to leverage both fully observed data as well as partial information. by externalizing the censoring problem through a log-likelihood loss function, we show that substantial improvements over state-of-the-art machine learning methods can be achieved. we present experimental results based on two open datasets as well as a study on a real dataset from a large retailer.
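the distributional loss described above, log-density for fully observed gaps and log-survival for censored ones, can be sketched for a concrete family; the weibull below is an illustrative choice, since the abstract does not commit to a specific distribution.

```python
import numpy as np

def weibull_censored_nll(t, observed, shape, scale):
    # negative log-likelihood for inter-arrival times under a weibull model:
    # fully observed times contribute log f(t), right-censored times contribute
    # log S(t); an rnn emitting (shape, scale) per step would minimize this loss
    t = np.asarray(t, dtype=float)
    shape = np.asarray(shape, dtype=float)
    scale = np.asarray(scale, dtype=float)
    z = (t / scale) ** shape
    log_f = np.log(shape / scale) + (shape - 1.0) * np.log(t / scale) - z
    log_s = -z
    ll = np.where(observed, log_f, log_s)
    return -ll.sum()

t = np.array([5.0, 12.0, 30.0])
observed = np.array([True, True, False])   # the last purchase gap is still censored
print(weibull_censored_nll(t, observed, shape=1.2, scale=15.0))
```

writing the censoring into the loss is the "externalizing" step the abstract mentions: the same function handles complete and partial observations without discarding either.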
machine learning in resting-state fmri analysis | machine learning techniques have gained prominence for the analysis of resting-state functional magnetic resonance imaging (rs-fmri) data. here, we present an overview of various unsupervised and supervised machine learning applications to rs-fmri. we present a methodical taxonomy of machine learning methods in resting-state fmri. we identify three major divisions of unsupervised learning methods with regard to their applications to rs-fmri, based on whether they discover principal modes of variation across space, time or population. next, we survey the algorithms and rs-fmri feature representations that have driven the success of supervised subject-level predictions. the goal is to provide a high-level overview of the burgeoning field of rs-fmri from the perspective of machine learning applications. |
on cross-validation for sparse reduced rank regression | in high-dimensional data analysis, regularization methods pursuing sparsity and/or low rank have received a lot of attention recently. to provide a proper amount of shrinkage, it is typical to use a grid search and a model comparison criterion to find the optimal regularization parameters. however, we show that fixing the parameters across all folds may result in an inconsistency issue, and it is more appropriate to cross-validate projection-selection patterns to obtain the best coefficient estimate. our in-sample error studies in jointly sparse and rank-deficient models lead to a new class of information criteria with four scale-free forms to bypass the estimation of the noise level. by use of an identity, we propose a novel scale-free calibration to help cross-validation achieve the minimax optimal error rate non-asymptotically. experiments support the efficacy of the proposed methods. |
monte-carlo sampling applied to multiple instance learning for histological image classification | we propose a patch sampling strategy based on a sequential monte-carlo method for high resolution image classification in the context of multiple instance learning. when compared with grid sampling and uniform sampling techniques, it achieves higher generalization performance. we validate the strategy on two artificial datasets and two histological datasets for breast cancer and sun exposure classification. |
cascaded v-net using roi masks for brain tumor segmentation | in this work we approach the brain tumor segmentation problem with a cascade of two cnns inspired by the v-net architecture \cite{vnet}, reformulating residual connections and making use of roi masks to constrain the networks to train only on relevant voxels. this architecture allows dense training on problems with highly skewed class distributions, such as brain tumor segmentation, by focusing training only on the vicinity of the tumor area. we report results on the brats2017 training and validation sets.
a geometric theory of higher-order automatic differentiation | first-order automatic differentiation is a ubiquitous tool across statistics, machine learning, and computer science. higher-order implementations of automatic differentiation, however, have yet to realize the same utility. in this paper i derive a comprehensive, differential geometric treatment of automatic differentiation that naturally identifies the higher-order differential operators amenable to automatic differentiation as well as explicit procedures that provide a scaffolding for high-performance implementations. |
using machine learning for handover optimization in vehicular fog computing | smart mobility management would be an important prerequisite for future fog computing systems. in this research, we propose a learning-based handover optimization for the internet of vehicles that would assist the smooth transition of device connections and offloaded tasks between fog nodes. to accomplish this, we make use of machine learning algorithms to learn from vehicle interactions with fog nodes. our approach uses a three-layer feed-forward neural network to predict the correct fog node at a given location and time with 99.2% accuracy on a test set. we also implement a dual-stacked recurrent neural network (rnn) with long short-term memory (lstm) cells capable of learning the latency, or cost, associated with these service requests. we build a simulation in jamscript using real-world vehicle movement data to generate a dataset for training these networks. we further propose the use of this predictive system in a smarter request-routing mechanism that minimizes service interruption during handovers between fog nodes and anticipates areas of low coverage, and we evaluate the models' performance through a series of experiments on a held-out test set.
predicting aircraft trajectories: a deep generative convolutional recurrent neural networks approach | reliable 4d aircraft trajectory prediction, whether in a real-time setting or for analysis of counterfactuals, is important to the efficiency of the aviation system. toward this end, we first propose a highly generalizable efficient tree-based matching algorithm to construct image-like feature maps from high-fidelity meteorological datasets - wind, temperature and convective weather. we then model the track points on trajectories as conditional gaussian mixtures with parameters to be learned from our proposed deep generative model, which is an end-to-end convolutional recurrent neural network that consists of a long short-term memory (lstm) encoder network and a mixture density lstm decoder network. the encoder network embeds last-filed flight plan information into fixed-size hidden state variables and feeds the decoder network, which further learns the spatiotemporal correlations from the historical flight tracks and outputs the parameters of gaussian mixtures. convolutional layers are integrated into the pipeline to learn representations from the high-dimension weather features. during the inference process, beam search, adaptive kalman filter, and rauch-tung-striebel smoother algorithms are used to prune the variance of generated trajectories. |
a multivariate spatial skew-t process for joint modeling of extreme precipitation indexes | to study trends in extreme precipitation across the us over the years 1951-2017, we consider 10 climate indexes that represent extreme precipitation, such as the annual maximum of daily precipitation and the annual maximum of consecutive 5-day average precipitation, which exhibit spatial correlation as well as mutual dependence. we consider the gridded data, produced by the climdex project (http://www.climdex.org/gewocs.html), constructed using daily precipitation data. in this paper, we propose a multivariate spatial skew-t process for joint modeling of extreme precipitation indexes and discuss its theoretical properties. the model framework allows bayesian inference while maintaining a computational time that is competitive with common multivariate geostatistical approaches. in a numerical study, we find that the proposed model outperforms multivariate spatial gaussian processes and multivariate spatial t-processes, as well as their univariate alternatives, in terms of various model selection criteria. we apply the proposed model to estimate the average decadal change in the extreme precipitation indexes throughout the united states and find several significant local changes.
per-tensor fixed-point quantization of the back-propagation algorithm | the high computational and parameter complexity of neural networks makes their training very slow and their deployment on energy- and storage-constrained computing systems difficult. many network complexity reduction techniques have been proposed, including fixed-point implementation. however, a systematic approach for designing full fixed-point training and inference of deep neural networks remains elusive. we describe a precision assignment methodology for neural network training in which all network parameters, i.e., activations and weights in the feedforward path, gradients and weight accumulators in the feedback path, are assigned close to minimal precision. the precision assignment is derived analytically and enables tracking the convergence behavior of the full precision training, known to converge a priori. thus, our work leads to a systematic methodology of determining suitable precision for fixed-point training. the near optimality (minimality) of the resulting precision assignment is validated empirically for four networks on the cifar-10, cifar-100, and svhn datasets. the complexity reduction arising from our approach is compared with other fixed-point neural network designs.
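a precision assignment ultimately configures a basic fixed-point quantizer for each tensor; the sketch below shows only that quantize-and-saturate primitive with arbitrary bit widths, not the paper's analytically derived assignment.

```python
import numpy as np

def to_fixed_point(x, int_bits=2, frac_bits=5):
    # quantize a tensor to a signed fixed-point format with the given numbers of
    # integer and fractional bits: round to the nearest representable value,
    # then saturate at the representable range
    step = 2.0 ** (-frac_bits)
    max_val = 2.0 ** int_bits - step
    return np.clip(np.round(x / step) * step, -2.0 ** int_bits, max_val)

w = np.random.default_rng(0).normal(scale=0.5, size=5)
print(w)                   # full-precision values
print(to_fixed_point(w))   # their fixed-point representations
```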
approximate inference for multiplicative latent force models | latent force models are a class of hybrid models for dynamic systems, combining simple mechanistic models with flexible gaussian process (gp) perturbations. an extension of this framework to include multiplicative interactions between the state and gp terms allows strong a priori control of the model geometry at the expense of tractable inference. in this paper we consider two methods of carrying out inference within this broader class of models. the first is based on an adaptive gradient matching approximation, and the second is constructed around mixtures of local approximations to the solution. we compare the performance of both methods on simulated data, and also demonstrate an application of the multiplicative latent force model on motion capture data. |
modeling frequency and severity of claims with the zero-inflated generalized cluster-weighted models | in this paper, we propose two important extensions to cluster-weighted models (cwms). first, we extend cwms to generalized cluster-weighted models (gcwms) by allowing modeling of non-gaussian distributions of the continuous covariates, as they frequently occur in insurance practice. secondly, we introduce a zero-inflated extension of the gcwm (zi-gcwm) for modeling insurance claims data with excess zeros coming from heterogeneous sources. additionally, we give two expectation-maximization (em) algorithms for parameter estimation given the proposed models. an appropriate simulation study shows that, for various settings and in contrast to the existing mixture-based approaches, both extended models perform well. finally, a real data set based on french automobile policies is used to illustrate the application of the proposed extensions.
latent variable modeling for generative concept representations and deep generative models | latent representations are the essence of deep generative models and determine their usefulness and power. for latent representations to be useful as generative concept representations, their latent space must support latent space interpolation, attribute vectors and concept vectors, among other things. we investigate and discuss latent variable modeling, including latent variable models, latent representations and latent spaces, particularly hierarchical latent representations and latent space vectors and geometry. our focus is on that used in variational autoencoders and generative adversarial networks. |
how did donald trump surprisingly win the 2016 united states presidential election? an information-theoretic perspective (clean sensing for big data analytics: optimal strategies, estimation error bounds tighter than the cram\'{e}r-rao bound) | donald trump was lagging behind in nearly all opinion polls leading up to the 2016 us presidential election, but he surprisingly won the election. this raises the following important questions: 1) why were most opinion polls not accurate in 2016? and 2) how can the accuracy of opinion polls be improved? in this paper, we study the inaccuracies of opinion polls in the 2016 election through the lens of information theory. we first propose a general framework of parameter estimation, called clean sensing (polling), which performs optimal parameter estimation with sensing cost constraints, from heterogeneous and potentially distorted data sources. we then cast opinion polling as a problem of parameter estimation from potentially distorted heterogeneous data sources, and derive the optimal polling strategy using heterogeneous and possibly distorted data under cost constraints. our results show that a larger number of data samples does not necessarily lead to better polling accuracy, which gives a possible explanation for the inaccuracies of opinion polls in 2016. the optimal sensing strategy should instead optimally allocate sensing resources over heterogeneous data sources according to several factors including data quality, and, moreover, for a particular data source, it should strike an optimal balance between the quality of data samples and the quantity of data samples. as a byproduct of this research, in a general setting, we derive a group of new lower bounds on the mean-squared errors of general unbiased and biased parameter estimators. these new lower bounds can be tighter than the classical cram\'{e}r-rao bound (crb) and chapman-robbins bound. our derivations proceed by studying the lagrange dual problems of certain convex programs. the classical cram\'{e}r-rao bound and chapman-robbins bound follow naturally from our results for special cases of these convex programs.
whittemore: an embedded domain specific language for causal programming | this paper introduces whittemore, a language for causal programming. causal programming is based on the theory of structural causal models and consists of two primary operations: identification, which finds formulas that compute causal queries, and estimation, which applies formulas to transform probability distributions into other probability distributions. causal programming provides abstractions to declare models, queries, and distributions with syntax similar to standard mathematical notation, and conducts rigorous causal inference without requiring detailed knowledge of the underlying algorithms. examples of causal inference with real data are provided, along with discussion of the implementation and possibilities for future extension.
tied hidden factors in neural networks for end-to-end speaker recognition | in this paper we propose a method that models speaker and session variability and is able to generate likelihood ratios using neural networks in an end-to-end phrase-dependent speaker verification system. as in joint factor analysis, the model uses tied hidden variables to model speaker and session variability, together with a map adaptation of some of the parameters of the model. in the training procedure our method jointly estimates the network parameters and the values of the speaker and channel hidden variables. this is done in a two-step backpropagation algorithm: first the network weights and factor loading matrices are updated, and then the hidden variables, whose gradients are calculated by aggregating the corresponding speaker or session frames, since these hidden variables are tied. the last layer of the network is defined as a linear regression probabilistic model whose inputs are the previous layer's outputs. this choice has the advantage that it produces likelihoods and, additionally, it can be adapted during enrolment using map without the need for gradient optimization. decisions are made based on the ratio of the output likelihoods of two neural network models, a speaker-adapted model and a universal background model. the method was evaluated on the rsr2015 database.
gray-box adversarial testing for control systems with machine learning component | neural networks (nn) have been proposed in the past as an effective means for both modeling and control of systems with very complex dynamics. however, despite the extensive research, nn-based controllers have not been adopted by the industry for safety critical systems. the primary reason is that systems with learning based controllers are notoriously hard to test and verify. even harder is the analysis of such systems against system-level specifications. in this paper, we provide a gradient based method for searching the input space of a closed-loop control system in order to find adversarial samples against some system-level requirements. our experimental results show that combined with randomized search, our method outperforms simulated annealing optimization. |
semiparametric estimation for cure survival model with left-truncated and right-censored data and covariate measurement error | in this paper, we discuss the cure model for survival data. unlike the usual right-censored survival data, we incorporate the features of left-truncation and measurement error in covariates. generally speaking, left-truncation causes a biased sample in survival analysis; measurement error in covariates may incur a tremendous bias if we do not deal with it properly. to deal with these challenges, we propose a flexible way to analyze left-truncated survival data and correct for measurement error in covariates. theoretical results for the proposed method are also established.