Tasks: Text Classification
Modalities: Text
Sub-tasks: multi-class-classification
Languages: English
Size: 10K - 100K
File size: 70,008 Bytes
Paper title,Paper link,Impact statement,Label,ID
Auto-Panoptic: Cooperative Multi-Component Architecture Search for Panoptic Segmentation,https://proceedings.neurips.cc/paper/2020/file/ec1f764517b7ffb52057af6df18142b7-Paper.pdf,"This work makes the first attempt to search for all key components of the panoptic pipeline and manages to accomplish this via the proposed Cooperative Multi-Component Architecture Search and efficient Path-Priority Search Policy. Most related work in the NAS literature for fine-grained vision tasks concentrates on searching a specific part of the network, while the balance of the overall network is largely ignored. Nevertheless, this type of technology is essential to improve the upper bound of popular detectors and segmentation networks. This may inspire new work towards the efficient search of overall architectures for fine-grained vision tasks, e.g., object detection, semantic segmentation, panoptic segmentation and so on. We are not aware of any imminent risks of placing anyone at a disadvantage. In the future, more constraints and optimization algorithms can be applied to strike the optimal trade-off between accuracy and latency to deliver customized architectures for different platforms and devices.",doesn't mention a harmful application,0
Design Space for Graph Neural Networks,https://proceedings.neurips.cc/paper/2020/file/c5c3d4fe6b2cc463c7d7ecba17cc9de7-Paper.pdf,"Impact on GNN research. Our work brings many valuable mindsets to the field of GNN research. For example, we fully adopt the principle of controlling model complexity when comparing different models, which is not yet adopted in most GNN papers. We focus on finding guidelines/principles when designing GNNs, rather than particular GNN instantiations. We emphasize that the best GNN designs can drastically differ across tasks (the state-of-the-art GNN model on one task may have poor performance on other tasks). We thus propose to evaluate models on diverse tasks, measured by a quantitative similarity metric. Rather than criticizing the weaknesses of existing GNN architectures, our goal is to build a framework that can help researchers understand GNN design choices when developing new models suitable for different applications. Our approach serves as a tool to demonstrate the innovation of a novel GNN model (e.g., in what kind of design spaces/task spaces a proposed algorithmic advancement is helpful), or a novel GNN task (e.g., showing that the task is not similar to any existing tasks and thus calls for new challenges of algorithmic development). Impact on machine learning research. Our approach is in fact applicable to general machine learning model design. Specifically, we hope the proposed controlled random search technique can assist fair evaluation of novel algorithmic advancements. To show whether a certain algorithmic advancement is useful, it is important to sample random model-task combinations, then investigate in what scenarios the algorithmic advancement indeed improves the performance. Additionally, the proposed task similarity metric can be used to understand similarities between general machine learning tasks, e.g., classification of MNIST and CIFAR-10. Our ranking-based similarity metric is fully general, as long as different designs can be ranked by their performance. Impact on other research domains. Our framework makes it easier than ever for experts in other disciplines to solve their problems via GNNs. Domain experts only need to provide properly formatted domain-specific datasets, then recommended GNN designs will be automatically picked and applied to the dataset. In the fastest mode, anchor GNN models will be applied to the novel task in order to measure its similarity with known GNN tasks, where the corresponding best GNN designs have been saved. Top GNN designs in the tasks with high similarity to the novel task will then be applied. If computational resources permit, a full grid search/random search over the design space can also easily be carried out on the new task. We believe this pipeline can significantly lower the barrier for applying GNN models, and thus greatly promote the application of GNNs in other research domains. Impact on society. As discussed above, given its clarity and accessibility, we are confident that our general approach can inspire novel applications that are of high impact to society. Additionally, its simplicity can also provide great opportunities for AI education, where students can learn from SOTA deep learning models and inspiring applications with ease.",doesn't mention a harmful application,1
Learning the Geometry of Wave-Based Imaging,https://proceedings.neurips.cc/paper/2020/file/5e98d23afe19a774d1b2dcbefd5103eb-Paper.pdf,"We do not see any major ethical consequences of this work. Our work has implications in the fields of exploratory imaging — earthquake detection, medical imaging, etc. Our work improves the quality and reliability of imaging in these fields. Improving these fields has direct societal impact, from finding new nature preserves to improved diagnosis in healthcare. A failure of our system would leave machine learning unreliable in exploratory imaging. Our method provides strong out-of-distribution generalization and hence is not biased according to the data.",doesn't mention a harmful application,2
Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising,https://proceedings.neurips.cc/paper/2020/file/ea6b2efbdd4255a9f1b3bbc6399b58f4-Paper.pdf,"In this paper, we introduce Noise2Same, a self-supervised framework for deep image denoising. As Noise2Same needs neither paired clean data, paired noisy data, nor a noise model, its application scenarios could be much broader than those of both traditional supervised and existing self-supervised denoising frameworks. The most direct application of Noise2Same is to perform denoising on digital images captured under poor conditions. Individuals and corporations related to photography may benefit from our work. Besides, Noise2Same could be applied as a pre-processing step for computer vision tasks such as object detection and segmentation [18], making the downstream algorithms more robust to noisy images. Also, specific research communities could benefit from the development of Noise2Same as well. For example, the capture of high-quality microscopy data of live cells, tissue, or nanomaterials is expensive in terms of budget and time [27]. Proper denoising algorithms allow researchers to obtain high-quality data from low-quality data and hence remove the need to capture high-quality data directly. In addition to image denoising applications, the self-supervised denoising framework could be extended to other domains such as audio noise reduction and single-cell [1]. On the negative side, as many imaging-based research tasks and computer vision applications may be built upon denoising algorithms, the failure of Noise2Same could potentially lead to biases or failures in these tasks and applications.",mentions a harmful application,3
When Counterpoint Meets Chinese Folk Melodies,https://proceedings.neurips.cc/paper/2020/file/bae876e53dab654a3d9d9768b1b7b91a-Paper.pdf,"The idea of integrating Western counterpoint into Chinese folk music generation is innovative. It would make positive broader impacts on three aspects: 1) It would facilitate more opportunities and challenges of music cultural exchanges at a much larger scale through automatic generation. For example, the inter-cultural style fused music could be used in children’s enlightenment education to stimulate their interest in both cultures. 2) It would further the idea of collaborative counterpoint improvisation between two parts (e.g., a human and a machine) to music traditions where such interaction was less common. 3) The computer-generated music may “reshape the musical idiom” [23], which may bring more opportunities and possibilities to produce creative music. The proposed work may also have some potential negative societal impacts: 1) Similar to other computational creativity research, the generated music has the possibility of plagiarism by copying short snippets from the training corpus, even though copyright infringement is not a concern as neither folk melodies nor Bach’s music has copyright. That being said, our online music generation approach conditions music generation on past human and machine generation, and is less likely to directly copy snippets than offline approaches do. 2) The proposed innovative music generation approach may cause disruptions to current music professions, even depriving them of their means of existence [23]. However, it also opens new areas and creates new needs in this we-media era. Overall, we believe that the positive impacts significantly outweigh the negative impacts.",mentions a harmful application,4
Learning from Label Proportions: A Mutual Contamination Framework,https://proceedings.neurips.cc/paper/2020/file/fcde14913c766cf307c75059e0e89af5-Paper.pdf,"LLP has been discussed as a model for summarizing a fully labeled dataset for public dissemination. The idea is that individual labels are not disclosed, so some degree of privacy is retained. As we show, consistent classification is still possible in this setting. If the two class-conditional distributions are nonoverlapping, labels of training instances can be recovered with no uncertainty by an optimal classifier. If the class-conditional distributions have some overlap, training instances in the nonoverlapping region can still be labeled with no uncertainty, while training instances in the overlapping regions can have their labels guessed with some uncertainty, depending on the degree of overlap.",doesn't mention a harmful application,5
Limits to Depth Efficiencies of Self-Attention,https://proceedings.neurips.cc/paper/2020/file/ff4dfdf5904e920ce52b48c1cef97829-Paper.pdf,"Our work aims at providing fundamental guidelines which can assist all fields that employ Transformer-based architectures to use more efficient models. This way, these fields can achieve their goals while consuming fewer resources. Additionally, this work made an effort to provide a theoretical interpretation by examining the (many) empirical signals already published by others, while providing only the required minimum of further experimentation. This was done under the belief that while experiments are crucial for the advancement of the field, it is important not to conduct them superfluously, as they incur an environmental price [Schwartz et al., 2019].",doesn't mention a harmful application,6
Meta-Consolidation for Continual Learning,https://proceedings.neurips.cc/paper/2020/file/a5585a4d4b12277fee5cad0880611bc6-Paper.pdf,"(as required by NeurIPS 2020 CFP) Continual learning is a key desideratum for Artificial General Intelligence (AGI). Hence, this line of research has the benefits as well as the pitfalls of any other research effort geared in this direction. In particular, our work can help deliver impact by making smarter AI products and services, which can learn and update themselves on-the-fly when newer tasks and domains are encountered, without forgetting previously acquired knowledge. This is a necessity in any large-scale deployment of machine learning and computer vision, including in social media, e-commerce, surveillance, e-governance, etc. - each of which has newer settings, tasks or domains added continually over time. Any negative effects of our work, such as legal and ethical concerns, are - to the best of our knowledge - not unique to this work, but are shared with any other new development in machine learning in general.",mentions a harmful application,7
Learning to Incentivize Other Learning Agents,https://proceedings.neurips.cc/paper/2020/file/ad7ed5d47b9baceb12045a929e7e2f66-Paper.pdf,"Our work is a step toward the goal of ensuring the common good in a potential future where independent reinforcement learning agents interact with one another and/or with humans in the real world. We have shown that cooperation can emerge by introducing an additional learned incentive function that enables one agent to affect another agent’s reward directly. However, as agents still independently maximize their own individual rewards, it is open as to how to prevent an agent from misusing the incentive function to exploit others. One approach for future research to address this concern is to establish new connections between our work and the emerging literature on reward tampering [11]. By sparking a discussion on this important aspect of multi-agent interaction, we believe our work has a positive impact on the long-term research endeavor that is necessary for RL agents to be deployed safely in real-world applications.",doesn't mention a harmful application,8
An Improved Analysis of (Variance-Reduced) Policy Gradient and Natural Policy Gradient Methods,https://proceedings.neurips.cc/paper/2020/file/56577889b3c1cd083b6d7b32d32f99d5-Paper.pdf,"The results of this paper improve the performance of policy-gradient methods for reinforcement learning, as well as our understanding of the existing methods. Through reinforcement learning, our study will also benefit several research communities, such as machine learning and robotics. We do not believe that the results in this work will cause any ethical issue, or put anyone at a disadvantage in our society.",doesn't mention a harmful application,9
Sample-Efficient Reinforcement Learning of Undercomplete POMDPs,https://proceedings.neurips.cc/paper/2020/file/d783823cc6284b929c2cd8df2167d212-Paper.pdf,"As this is a theoretical contribution, we do not envision that our direct results will have a tangible societal impact. Our broader line of inquiry could impact a line of thinking in a way that provides additional means to provide confidence intervals relevant for planning and learning. There is an increasing need for applications to understand planning under uncertainty in the broader context of safety and reliability, and POMDPs provide one potential framework.",doesn't mention a harmful application,10
Reward-rational (implicit) choice: A unifying formalism for reward learning,https://proceedings.neurips.cc/paper/2020/file/2f10c1578a0706e06b6d7db6f0b4a6af-Paper.pdf,"As AI capability advances, it is becoming increasingly important to align the objectives of AI agents to what people want. From how assistive robots can best help their users, to how autonomous cars should trade off between safety risk and efficiency, to how recommender systems should balance revenue considerations with longer-term user happiness and with avoiding influencing user views, agents cannot rely on a reward function specified once and set in stone. By putting different sources of information about the reward explicitly under the same framework, we hope our paper contributes towards a future in which agents maintain uncertainty over what their reward should be, and use different types of feedback from humans to refine their estimate and become better aligned with what people want over time – be they designers or end-users. On the flip side, changing reward functions also raises its own set of risks and challenges. First, the relationship between designer objectives and end-user objectives is not clear. Our framework can be used to adapt agents to end-users’ preferences, but this takes away control from the system designers. This might be desirable for, say, home robots, but not for safety-critical systems like autonomous cars, where designers might need to enforce certain constraints a priori on the reward adaptation process. More broadly, most systems have multiple stake-holders, and what it means to do ethical preference aggregation remains an open problem. Further, if the robot’s model of the human is misspecified, adaptation might lead to more harm than good, with the robot inferring a worse reward function than what a designer could specify by hand.",mentions a harmful application,11
Flows for simultaneous manifold learning and density estimation,https://proceedings.neurips.cc/paper/2020/file/051928341be67dcba03f0e04104d9047-Paper.pdf,"Manifold-learning flows have the potential to improve the efficiency with which scientists extract knowledge from large-scale experiments. Many phenomena have their most accurate description in terms of complex computer simulations which do not admit a tractable likelihood. In this common case, normalizing flows can be trained on synthetic data and used as a surrogate for the likelihood function, enabling high-quality inference on model parameters [21]. When the data have a manifold structure, manifold-learning flows may improve the quality and efficiency of this process further and ultimately contribute to scientific progress. We have demonstrated this with a real-world particle physics dataset, though the same technique is applicable to fields as diverse as neuroscience, systems biology, and epidemiology. All generative models carry a risk of being abused for the generation of fake data that are then masqueraded as real documents. This danger also applies to manifold-learning flows. While manifold-learning flows are currently far away from being able to generate realistic high-resolution images, videos, or audio, this concern should be kept in mind in the long term. Finally, the models we trained on image datasets of human faces clearly lack diversity. They reproduce and reinforce the biases inherent in the training data. Before using such (or other) models in any real-life application, it is crucial to understand, measure, and mitigate such biases.",mentions a harmful application,12
Implicit Neural Representations with Periodic Activation Functions,https://proceedings.neurips.cc/paper/2020/file/53c04118df112c13a8c34b38343b9c10-Paper.pdf,"The proposed SIREN representation enables accurate representations of natural signals, such as images, audio, and video in a deep learning framework. This may be an enabler for downstream tasks involving such signals, such as classification for images or speech-to-text systems for audio. Such applications may be leveraged for both positive and negative ends. SIREN may in the future further enable novel approaches to the generation of such signals. This has potential for misuse in impersonating actors without their consent. For an in-depth discussion of such so-called DeepFakes, we refer the reader to a recent review article on neural rendering [16].",mentions a harmful application,13
Neural Message Passing for Multi-Relational Ordered and Recursive Hypergraphs,https://proceedings.neurips.cc/paper/2020/file/217eedd1ba8c592db97d0dbe54c7adfc-Paper.pdf,"Message Passing Neural Networks (MPNNs) are a framework for deep learning on graph structured data. Graph structures are universal and very generic structures commonly seen in various forms in computer vision, natural language processing, recommender systems, traffic prediction, generative models, and many more. Graphs can have many variations such as multi-relational, heterogeneous, hypergraphs, etc. Our research in this paper unifies several existing MPNN methods on these variations. While we show how our research could be used for academic networks and factual knowledge, it opens up many more possibilities in natural language processing (NLP). We see opportunities for research applying our work for beneficial purposes, such as investigating whether we could improve performance of NLP tasks such as machine reading comprehension, relation extraction, machine translation, and many more. Potentially hazardous applications include trying to predict criminality or credit from social networks. Such applications may reproduce and exacerbate bias, and readers of the paper should be aware that the presented model should not be applied naively to such tasks.",mentions a harmful application,14
COT-GAN: Generating Sequential Data via Causal Optimal Transport,https://proceedings.neurips.cc/paper/2020/file/641d77dd5271fca28764612a028d9c8e-Paper.pdf,"The COT-GAN algorithm introduced in this paper is suitable to generate sequential data, when the real dataset consists of i.i.d. sequences or of stationary time series. It opens up doors to many applications that can benefit from time series synthesis. For example, researchers often do not have access to abundant training data due to privacy concerns, high cost, and data scarcity. This hinders the capability of building accurate predictive models. Ongoing research is aimed at developing a modified COT-GAN algorithm to generate financial time series. The high non-stationarity of financial data requires different features and architectures, whilst causality when measuring distances between sequences remains the crucial tool. The application to market generation is of main interest for the financial and insurance industry, for example in model-independent pricing and hedging, portfolio selection, risk management, and stress testing. In broader scientific research, our approach can be used to estimate from data the parameters of simulation-based models that describe physical processes. These models can be, for instance, differential equations describing neural activities, compartmental models in epidemiology, and chemical reactions involving multiple reagents.",doesn't mention a harmful application,15
Cream of the Crop: Distilling Prioritized Paths For One-Shot Neural Architecture Search,https://proceedings.neurips.cc/paper/2020/file/d072677d210ac4c03ba046120f0802ec-Paper.pdf,"Similar to previous NAS works, this work does not have immediate societal impact, since the algorithm is only designed for image classification, but it can indirectly impact society. As an example, our work may inspire the creation of new algorithms and applications with direct societal implications. Moreover, compared with other NAS methods that require an additional teacher model to guide the training process, our method does not need any external teacher models. So our method can be used in a closed data system, ensuring the privacy of user data.",doesn't mention a harmful application,16
Deep Evidential Regression,https://proceedings.neurips.cc/paper/2020/file/aab085461de182608ee9f607f3f7d18f-Paper.pdf,"Uncertainty estimation for neural networks has very significant societal impact. Neural networks are increasingly being trained as black-box predictors and being placed in larger decision systems where errors in their predictions can pose immediate threat to downstream tasks. Systematic methods for calibrated uncertainty estimation under these conditions are needed, especially as these systems are deployed in safety critical domains, such as for autonomous vehicle control [29], medical diagnosis [43], or in settings with large dataset imbalances and bias such as crime forecasting [24] and facial recognition [3]. This work is complementary to a large portion of machine learning research which is continually pushing the boundaries on neural network precision and accuracy. Instead of solely optimizing larger models for increased performance, our method focuses on how these models can be equipped with the ability to estimate their own confidence. Our results demonstrating superior calibration of our method over baselines are also critical in ensuring that we can place a certain level of trust in these algorithms and in understanding when they say “I don’t know”. While there are clear and broad benefits of uncertainty estimation in machine learning, we believe it is also important to recognize potential societal challenges that may arise. With increased performance and uncertainty estimation capabilities, humans will inevitably become increasingly trusting in a model’s predictions, as well as its ability to catch dangerous or uncertain decisions before they are executed. Thus, it is important to continue to pursue redundancy in such learning systems to increase the likelihood that mistakes can be caught and corrected independently.",mentions a harmful application,17
The Value Equivalence Principle for Model-Based Reinforcement Learning,https://proceedings.neurips.cc/paper/2020/file/3bb585ea00014b0e3ebe4c6dd165a358-Paper.pdf,"The bulk of the research presented in this paper consists of foundational theoretical results about the learning of models for model-based reinforcement learning agents. While applications of these agents can have social impacts depending upon their use, our results merely serve to illuminate desirable properties of models and facilitate the subsequent training of agents using them. In short, this work is largely theoretical and does not present any foreseeable societal impact, except in the general concerns over progress in artificial intelligence.",doesn't mention a harmful application,18
Graph Policy Network for Transferable Active Learning on Graphs,https://proceedings.neurips.cc/paper/2020/file/73740ea85c4ec25f00f9acbd859f861d-Paper.pdf,"Graph-structured data are ubiquitous in the real world, covering a variety of domains and applications such as social science, biology, medicine, and political science. In many domains such as biology and medicine, annotating a large number of labeled data could be extremely expensive and time consuming. Therefore, the algorithm proposed in this paper could help significantly reduce the labeling efforts in these domains — we can train systems on domains where labeled data are available, then transfer to those lower-resource domains. We believe such systems can help accelerate research and development processes that usually take a long time, in domains such as drug development. It can potentially also lower the cost for such research by reducing the need for expert annotations. However, we also acknowledge potential social and ethical issues related to our work. 1. Our proposed system can effectively reduce the need for human annotations. However, from a broader point of view, this can potentially lead to a reduction of employment opportunities, which may cause layoffs of data annotators. 2. GNNs are widely used in domains related to critical needs such as healthcare and drug development. The community needs to be extra cautious and rigorous since any mistake may cause harm to patients. 3. Training the policy network for active learning on multiple graphs is relatively time- and computational-resource-consuming. This line of research may produce more carbon footprint compared to some other work. Therefore, how to accelerate the training process by developing more efficient algorithms requires further investigation. Nonetheless, we believe that the directions of active learning and transfer learning provide a hopeful path towards our ultimate goal of data efficiency and interpretable machine learning.",mentions a harmful application,19
User-Dependent Neural Sequence Models for Continuous-Time Event Data,https://proceedings.neurips.cc/paper/2020/file/f56de5ef149cf0aedcc8f4797031e229-Paper.pdf,"While many of the successful and highly-visible applications of machine learning are in classification and regression, there are a broad range of applications that don’t naturally fit into these categories and that can potentially benefit significantly from machine learning approaches. In particular, in this paper we focus on continuous-time event data, which is very common in real-world applications but has not yet seen significant attention from the ML research community. There are multiple important problems in society where such data is common and that could benefit from the development of better predictive and simulation models, including: • Education: Understanding of individual learning habits of students, especially in online educational programs, could improve and allow for more personalized curricula. • Medicine: Customized tracking and predictions of medical events could save lives and improve patients’ quality of living. • Behavioral Models: Person-specific simulations of their behavior can lead to better systematic understandings of people’s social activities and actions in day-to-day lives. • Cybersecurity: Through its user identification capabilities, our work could aid cyber-security applications for the purposes of identifying fraud and identity theft. Another potential positive broad impact of the work is that by utilizing amortized VI, our methods do not require further costly training or fine-tuning to accommodate new users, which can potentially produce energy savings and lessen environmental impact in a production setting. On the other hand, as with many machine learning technologies, there is also always the potential for negative impact from a societal perspective. For example, more accurate individualized models for user-generated data could be used in a negative fashion for applications such as surveillance (e.g., to monitor and negatively impact individuals in protected groups). In addition, better predictions and recommendations for products and services, through explicitly conditioning on prior behavior from a user, could potentially further worsen existing privacy concerns.",mentions a harmful application,20
Self-paced Contrastive Learning with Hybrid Memory for Domain Adaptive Object Re-ID,https://proceedings.neurips.cc/paper/2020/file/821fa74b50ba3f7cba1e6c53e8fa6845-Paper.pdf,"Our method can help to identify and track different types of objects (e.g., vehicles, cyclists, pedestrians, etc.) across different cameras (domains), thus boosting the development of smart retail, smart transportation, and smart security systems in the future metropolises. In addition, our proposed self-paced contrastive learning is quite general and not limited to the specific research field of object re-ID. It can be well extended to broader research areas, including unsupervised and semi-supervised representation learning. However, object re-ID systems, when applied to identify pedestrians and vehicles in surveillance systems, might give rise to the infringement of people’s privacy, since such re-ID systems often rely on non-consensual surveillance data for training, i.e., it is unlikely that all human subjects even knew they were being recorded. Therefore, governments and officials need to carefully establish strict regulations and laws to control the usage of re-ID technologies. Otherwise, re-ID technologies can potentially equip malicious actors with the ability to surveil pedestrians or vehicles through multiple CCTV cameras without their consent. The research community should also avoid using datasets with ethics issues; e.g., DukeMTMC [37], which has been taken down due to the violation of data collection terms, should no longer be used. We would not evaluate our method on DukeMTMC-related benchmarks as well. Furthermore, we should be cautious of misidentification by the re-ID systems to avoid possible disturbance. Also, note that the demographic makeup of the datasets used is not representative of the broader population.",mentions a harmful application,21
Real World Games Look Like Spinning Tops,https://proceedings.neurips.cc/paper/2020/file/ca172e964907a97d5ebd876bfdd4adbd-Paper.pdf,"This work focuses on better understanding of the mathematical properties of real world games and how they could be used to understand successful AI techniques that were developed in the past. Since we focus on retrospective analysis of a mathematical phenomenon, on exposing an existing structure, and deepening our understanding of the world, we do not see any direct risks it entails. Introduced notions and insights could be used to build better, more engaging AI agents for people to play with in real world games (e.g. AIs that grow with the player, matching their strengths and weaknesses). In a broader spectrum, some of the insights could be used for designing and implementing new games that humans would find enjoyable through the challenges they pose. In particular, it could be viewed as a model for measuring how much notion of progress a game contains. However, we acknowledge that methods enabling improved analysis of games may be used for designing products with potentially negative consequences (e.g., games that are highly addictive) rather than positive (e.g., games that are enjoyable and mentally developing).",mentions a harmful application,22
Adapting Neural Architectures Between Domains,https://proceedings.neurips.cc/paper/2020/file/08f38e0434442128fab5ead6217ca759-Paper.pdf,This paper provides a novel perspective of cross-domain generalization in neural architecture search towards the efficient design of neural architectures with strong generalizability. This will lead to a better understanding of the generalizability of neural architectures. The proposed method will be used to design neural architectures for computer vision tasks with affordable computation cost.,doesn't mention a harmful application,23
Modeling Noisy Annotations for Crowd Counting,https://proceedings.neurips.cc/paper/2020/file/22bb543b251c39ccdad8063d486987bb-Paper.pdf,"In this paper, we introduce a novel loss function for counting crowd numbers by explicitly considering annotation noise. It can be applied to any density map based network architecture and generally improves counting accuracy. The research is also helpful for monitoring crowd numbers in public and preventing accidents caused by overcrowding. It could also be used in retail businesses to estimate the occupancy of a store or area, which helps with personnel and resource management. Our method could also be applied to other objects, such as cell counting, plant/animal counting, etc., and other research areas that use point-wise annotations, e.g., eye gaze estimation. Since the research is based on images captured by cameras, users may be concerned about the privacy problem. However, our method does not directly detect or track individuals, and thus this concern may be eased.",doesn't mention a harmful application,24
Byzantine Resilient Distributed Multi-Task Learning,https://proceedings.neurips.cc/paper/2020/file/d37eb50d868361ea729bb4147eb3c1d8-Paper.pdf,"The problem of Byzantine resilient aggregation of distributed machine learning models has been actively studied in recent years; however, the issue of Byzantine resilient distributed learning in multi-task networks has received much less attention. It is a general intuition that MTL is robust and resilient to cyber-attacks since it can identify attackers by measuring similarities between neighbors. In this paper, we have shown that some commonly used similarity measures are not resilient against certain attacks. With an increase in data heterogeneity, we hope this work could highlight the security and privacy concerns in designing distributed MTL frameworks.",doesn't mention a harmful application,25
From Predictions to Decisions: Using Lookahead Regularization,https://proceedings.neurips.cc/paper/2020/file/2adcfc3929e7c03fac3100d3ad51da26-Paper.pdf,"In our work, the learning objective was designed to align with and support the possible use of a predictive model to drive decisions by users. It is our belief that a responsible and transparent deployment of models with “lookahead-like” regularization components should avoid the kinds of mistakes that can be made when predictive methods are conflated with causally valid methods. At the same time, we have made a strong simplifying assumption, that of covariate shift, which requires that the relationship between covariates and outcome variables is invariant as decisions are made and the feature distribution changes. This strong assumption is made to ensure validity for the lookahead regularization, since we need to be able to perform inference about counterfactual observations. As discussed by Mueller et al. [31] and Peters et al. [34], there exist real-world tasks that reasonably satisfy this assumption, and yet at the same time, other tasks — notably those with unobserved confounders — where this assumption would be violated. Moreover, this assumption is not testable on the observational data. This, along with the need to make an assumption about the user decision model, means that an application of the method proposed here should be done with care and will require some domain knowledge to understand whether or not the assumptions are plausible. Furthermore, the validity of the interval estimates requires that any assumptions for the interval model used are satisfied and that the weights w provide a reasonable estimation of the density ratio p′/p. In particular, fitting to p′ which has little to no overlap with p (see Figure 2) may result in underestimating the possibility of bad outcomes. If used carefully and successfully, then the system provides safety and protects against the misuse of a model. If used in a domain for which the assumptions fail to hold, then the framework could make things worse, by trading accuracy for an incorrect view of user decisions and the effect of these decisions on outcomes. We would also caution against any specific interpretation of the application of the model to the wine and diabetes data sets. We note that model misspecification of f∗ could result in arbitrarily bad outcomes, and estimating f∗ in any high-stakes setting requires substantial domain knowledge and should err on the side of caution. We use the data sets for purely illustrative purposes because we believe the results are representative of the kinds of results that are available when the method is correctly applied to a domain of interest.",mentions a harmful application,26
Finite-Time Analysis of Round-Robin Kullback-Leibler Upper Confidence Bounds for Optimal Adaptive Allocation with Multiple Plays and Markovian Rewards,https://proceedings.neurips.cc/paper/2020/file/597c7b407a02cc0a92167e7a371eca25-Paper.pdf,"This work touches upon a very old problem dating back to 1933 and the work of [39]. Therefore, we don’t anticipate any new societal impacts or ethical aspects, that are not well understood by now.",doesn't mention a harmful application,27
Towards Interaction Detection Using Topological Analysis on Neural Networks,https://proceedings.neurips.cc/paper/2020/file/473803f0f2ebd77d83ee60daaa61f381-Paper.pdf,"The proposed PID algorithm can be applied in various fields because it provides knowledge about a domain. Any researcher who needs to design experiments might benefit from our proposed algorithm in the sense that it can help researchers formulate hypotheses that could lead to new data collection and experiments. For example, PID can help us discover the combined effects of drugs on the human body: by utilizing PID on patients’ records, we might find that using Phenelzine together with Fluoxetine has a strong interaction effect towards serotonin syndrome. Thus, PID has great potential in helping the development of new therapies for saving lives. Also, this project will lead to effective and efficient algorithms for finding useful any-order crossing features in an automated way. Finding useful crossing features is one of the most crucial tasks in recommender systems. Engineers and scientists in e-commerce companies may benefit from our results, as our algorithm can alleviate the human effort of finding these useful patterns in the data.",doesn't mention a harmful application,28
Why Normalizing Flows Fail to Detect Out-of-Distribution Data,https://proceedings.neurips.cc/paper/2020/file/ecb9fe2fbb99c31f567e9823e884dbec-Paper.pdf,"Out-of-distribution detection is crucial for robust, reliable and fair machine learning systems. Mitchell et al. [27] and Gebru et al. [13] argue that applying machine learning models outside of the context where they were trained and tested can lead to dangerous and discriminatory outcomes in high-stake domains. We hope that our work will generally contribute to the understanding of out-of-distribution detection and facilitate methodological progress in this area.",doesn't mention a harmful application,29
AdaShare: Learning What To Share For Efficient Deep Multi-Task Learning,https://proceedings.neurips.cc/paper/2020/file/634841a6831464b64c072c8510c7f35c-Paper.pdf,"Our research improves the capacity of deep neural networks to solve many tasks at once in a more efficient manner. It enables the use of smaller networks to support more tasks, while performing knowledge transfer between related tasks to improve their accuracy. For example, we showed that our proposed approach can solve five computer vision tasks (semantic segmentation, surface normal prediction, depth prediction, keypoint detection and edge estimation) with 80% fewer parameters while achieving the same performance as the standard approach. Our approach can thus have a positive impact on applications that require multiple tasks such as computer vision for robotics. Potential applications could be in assistive robots, autonomous navigation, robotic picking and packaging, rescue and emergency robotics and AR/VR systems. Our research can reduce the memory and power consumption of such systems and enable them to be deployed for longer periods of time and become smaller and more agile. The lessened power consumption could have a high impact on the environment as AI systems become more prevalent. Negative impacts of our research are difficult to predict; however, it shares many of the pitfalls associated with deep learning models. These include susceptibility to adversarial attacks and data poisoning, dataset bias, and lack of interpretability. Other risks associated with deployment of computer vision systems include privacy violations when images are captured without consent, or used to track individuals for profit, or increased automation resulting in job losses. While we believe that these issues should be mitigated, they are beyond the scope of this paper. Furthermore, we should be cautious of failures of the system, which could impact the performance and user experience of the high-level AI systems that rely on our research.",mentions a harmful application,30
AOT: Appearance Optimal Transport Based Identity Swapping for Forgery Detection,https://proceedings.neurips.cc/paper/2020/file/f718499c1c8cef6730f9fd03c8125cab-Paper.pdf,"Deepfake refers to synthesized media in which a portrait of a person in real media is replaced by that of someone else. Deepfakes have been widely applied in the digital entertainment industry, but they also present potential threats to the public. Identity swapping is an approach to produce Deepfakes and is also the research direction of this paper. Given the sensitivity of Deepfakes and their potential negative impacts, we further discuss the potential threats and the corresponding mitigation solutions with respect to our work.",mentions a harmful application,31
Permute-and-Flip: A new mechanism for differentially private selection,https://proceedings.neurips.cc/paper/2020/file/01e00f2f4bfcbb7505cb641066f2859b-Paper.pdf,"Our work fits in the established research area of differential privacy, which enables the positive societal benefits of gleaning insight and utility from data sets about people while offering formal guarantees of privacy to individuals who contribute data. While these benefits are largely positive, unintended harms could arise due to misapplication of differential privacy or misconceptions about its guarantees. Additionally, difficult social choices are faced when deciding how to balance privacy and utility. Our work addresses a foundational differential privacy task and enables better utility-privacy tradeoffs within this broader context.",mentions a harmful application,32
Classification with Valid and Adaptive Coverage,https://proceedings.neurips.cc/paper/2020/file/244edd7e85dc81602b7615cd705545f5-Paper.pdf,"Machine learning algorithms are increasingly relied upon by decision makers. It is therefore crucial to combine the predictive performance of such complex machinery with practical guarantees on the reliability and uncertainty of their output. We view the calibration methods presented in this paper as an important step towards this goal. In fact, uncertainty estimation is an effective way to quantify and communicate the benefits and limitations of machine learning. Moreover, the proposed methodologies provide an attractive way to move beyond the standard prediction accuracy measure used to compare algorithms. For instance, one can compare the performance of two candidate predictors, e.g., random forest and neural network (see Figure 3), by looking at the size of the corresponding prediction sets and/or their conditional coverage. Finally, the approximate conditional coverage that we seek in this work is highly relevant within the broader framework of fairness, as discussed by [17] within a regression setting. While our approximate conditional coverage already implicitly reduces the risk of unwanted bias, an equalized coverage requirement [17] can also be easily incorporated into our methods to explicitly avoid discrimination based on protected categories. We conclude by emphasizing that the validity of our methods relies on the exchangeability of the data points. If this assumption is violated (e.g., with time-series data), our prediction sets may not have the right coverage. A general suggestion here is to always try to leverage specific knowledge of the data and of the application domain to judge whether the exchangeability assumption is reasonable. Finally, our data-splitting techniques in Section 4 offer a practical way to verify empirically the validity of the predictions on any given data set.",doesn't mention a harmful application,33
Learning Kernel Tests Without Data Splitting,https://proceedings.neurips.cc/paper/2020/file/44f683a84163b3523afe57c2e008bc8c-Paper.pdf,"Hypothesis testing and valid inference after model selection are fundamental problems in statistics, which have recently attracted increasing attention also in machine learning. Kernel tests such as MMD are not only used for statistical testing, but also to design algorithms for deep learning and GANs [41, 42]. The question of how to select the test statistic naturally arises in kernel-based tests because of the kernel choice problem. Our work shows that it is possible to overcome the need of (wasteful and often heuristic) data splitting when designing hypothesis tests with feasible null distribution. Since this comes without relevant increase in computational resources we expect the proposed method to replace the data splitting approach in applications that fit the framework considered in this work. Theorem 1 is also applicable beyond hypothesis testing and extends the previously known PSI framework proposed by Lee et al. [24].",doesn't mention a harmful application,34
Passport-aware Normalization for Deep Model Protection,https://proceedings.neurips.cc/paper/2020/file/ff1418e8cc993fe8abcfe3ce2003e5c5-Paper.pdf,"Though deep learning has evolved very fast in recent years, IP protection for deep models is seriously under-researched. In this work, we mainly aim to propose a general technique for deep model IP protection. It will help both academia and industry to protect their interests from illegal distribution or usage. We hope it can inspire more works along this important direction.",doesn't mention a harmful application,35
Group Knowledge Transfer: Federated Learning of Large CNNs at the Edge,https://proceedings.neurips.cc/paper/2020/file/a1d4c20b182ad7137ab3606f0e3fc8a4-Paper.pdf,"FedGKT can efficiently train large deep neural networks (CNNs) in resource-constrained edge devices (such as smartphones, IoT devices, and edge servers). Unlike past FL approaches, FedGKT demonstrates the feasibility of training a large server-side model by using many small client models. FedGKT preserves the data privacy requirements of the FL approach but also works within the constraints of an edge computing environment. Smartphone users may benefit from this technique because their private data is protected, and they may also simultaneously obtain a high-quality model service. Organizations such as hospitals, and other non-profit entities with limited training resources, can collaboratively train a large CNN model without revealing their datasets while achieving significant training cost savings. They can also meet requirements regarding the protection of intellectual property, confidentiality, regulatory restrictions, and legal constraints. As for the potential risks of our method, a client can maliciously send incorrect hidden feature maps and soft labels to the server, which may potentially impact the overall model accuracy. These effects must be detected and addressed to maintain overall system stability. Second, the relative benefits for each client may vary. For instance, in terms of fairness, edge nodes which have smaller datasets may obtain more model accuracy improvement from collaborative training than those which have a larger amount of training data. Our training framework does not consider how to balance this interest of different parties.",mentions a harmful application,36
Improving Local Identifiability in Probabilistic Box Embeddings,https://proceedings.neurips.cc/paper/2020/file/01c9d2c5b3ff5cbba349ec39a570b5e3-Paper.pdf,This work does not present any foreseeable societal consequence.,doesn't mention a harmful application,37
A Finite-Time Analysis of Two Time-Scale Actor-Critic Methods,https://proceedings.neurips.cc/paper/2020/file/cc9b3c69b56df284846bf2432f1cba90-Paper.pdf,"This work could positively impact the industrial application of actor-critic algorithms and other reinforcement learning algorithms. The theorem exhibits the sample complexity of actor-critic algorithms, which could be used to estimate the required training time of reinforcement learning models. Another direct application of our result is to set the learning rate according to the finite-time bound, by optimizing the constant factors of the dominant terms. In this sense, the result could potentially reduce the overhead of hyper-parameter tuning, thus saving both human and computational resources. Moreover, the new analysis in this paper can potentially help people in different fields to understand the broader class of two-time-scale algorithms, in addition to actor-critic methods. To our knowledge, the algorithm and theory studied in our paper do not raise any ethical issues.",doesn't mention a harmful application,38
Active Invariant Causal Prediction: Experiment Selection through Stability,https://proceedings.neurips.cc/paper/2020/file/b197ffdef2ddc3308584dce7afa3661b-Paper.pdf,"Any method that learns from finite data is subject to statistical estimation errors and model assumptions that necessarily limit the full applicability of its findings. Unfortunately, study outcomes are not always communicated with the required qualifications. As an example, statistical hypothesis testing is often employed carelessly, e.g. by using p-values to claim “statistical significance” without paying attention to the underlying assumptions [5]. There is a danger that this problem gets exacerbated when one aims to estimate causal structures. Estimates from causal inference algorithms could be claimed to “prove” a given causal relationship, ruling out various alternative explanations that one would consider when explaining a statistical association. For example, ethnicity could be claimed to have a causal effect on criminality and thereby used as a justification for oppressive political measures. While this would represent a clear abuse of the technology, we as researchers have to ensure that similar mistakes in interpretation are not made unintentionally. This implies being conscientious about understanding as well as stating the limitations of our research. While there is a risk that causal inference methods are misused as described above, there is of course also an array of settings where causal learning—and in particular active causal learning—can be extremely useful. As our main motivation we envision the empirical sciences where interventions correspond to physical experiments which can be extremely costly in terms of time and/or money. For complex systems, as for example gene regulatory networks in biology, it might be difficult for human scientists to choose informative experiments, particularly if they are forced to rely on data alone. Our goal is to develop methods to aid scientists to better understand their data and perform more effective experiments, resulting in significant resource savings. The specific impact of our proposed methodology will depend on the application. For the method we propose in this work, one requirement for application would be that the experiments yield more than one data point (and ideally many), so that our invariance-based approach can be employed. In future work, we aim to develop methodology that is geared towards the setting where only very few data points per experiment are available.",doesn't mention a harmful application,39
Continuous Meta-Learning without Tasks,https://proceedings.neurips.cc/paper/2020/file/cc3f5463bc4d26bc38eadc8bcffbc654-Paper.pdf,"Our work provides a method to extend meta-learning algorithms beyond the task-segmented case, to the time series domain. Equivalently, our work extends core methods in changepoint detection, enabling the use of highly expressive predictive models via empirical Bayes. This work has the potential to extend the domain of applicability of both of these methods. Standard meta-learning relies on a collection of datasets, each corresponding to discrete tasks. A natural question is how such datasets are constructed; in many cases, these datasets rely on segmentation of time series data by experts. Thus, our work has the potential to make meta-learning algorithms applicable to problems that, previously, would have been too expensive or impossible to segment. Moreover, our work has the potential to improve the applicability of changepoint detection methods to difficult time series forecasting problems. While MOCA has the potential to expand the domain of problems addressable via meta-learning, this has the effect of amplifying the risks associated with these methods. Meta-learning enables efficient learning for individual members of a population via leveraging empirical priors. There are clear risks in few-shot learning generally: for example, efficient facial recognition from a handful of images has clear negative implications for privacy. Moreover, while there is promising initial work on fairness for meta-learning [39], we believe considerable future research is required to understand the degree to which meta-learning algorithms increase undesirable bias or decrease fairness. While it is plausible that fine-tuning to the individual results in reduced bias, there are potential unforeseen risks associated with the adaptation process, and future research should address how bias is potentially introduced in this process. Relative to decision making rules that are fixed across a population, algorithms which fine-tune decision making to the individual present unique challenges in analyzing fairness. Further research is required to ensure that the adaptive learning enabled by algorithms such as MOCA does not lead to unfair outcomes.",mentions a harmful application,40
Learning Rich Rankings,https://proceedings.neurips.cc/paper/2020/file/6affee954d76859baa2800e1c49e2c5d-Paper.pdf,"Flexible ranking distributions that can be learned with provable guarantees can facilitate more powerful and reliable ranking algorithms inside recommender systems, search engines, and other ranking-based technological products. As a potential adverse consequence, more powerful and reliable learning algorithms can lead to an increased inappropriate reliance on technological solutions to complex problems, where practitioners may not fully grasp the limitations of our work, e.g. independence assumptions, or that our risk bounds, as established here, do not hold for all data generating processes.",mentions a harmful application,41
Reinforcement Learning for Control with Multiple Frequencies,https://proceedings.neurips.cc/paper/2020/file/216f44e2d28d4e175a194492bde9148f-Paper.pdf,"In recent years, reinforcement learning (RL) has shown remarkable successes in various areas, where most of their results are based on the assumption that all decision variables are simultaneously determined at every discrete time step. However, many real-world sequential decision-making problems involve multiple decision variables whose control frequencies are different by the domain requirement. In this situation, standard RL algorithms without considering the control frequency requirement may suffer from severe performance degradation as discussed in Section 3. This paper provides a theoretical and algorithmic foundation of how to address multiple control frequencies in RL, which enables RL to be applied to more complex and diverse real-world problems that involve decision variables with different frequencies. Therefore, this work would be beneficial for those who want to apply RL to various tasks that inherently have multiple control frequencies. As we provide a general-purpose methodology, we believe this work has little to do with a particular system failure or a particular data bias. On the other hand, this work could contribute to accelerating industrial adoption of RL, which has the potential to adversely affect employment due to automation.",mentions a harmful application,42
Latent Dynamic Factor Analysis of High-Dimensional Neural Recordings,https://proceedings.neurips.cc/paper/2020/file/beb04c41b45927cf7e9f8fd4bb519e86-Paper.pdf,"While progress in understanding the brain is improving life through research, especially in mental health and addiction, no brain disorder is yet well understood mechanistically. Faced with the reality that each promising discovery inevitably reveals new subtleties, one reasonable goal is to be able to change behavior in desirable ways by modifying specific brain circuits; in animals, technologies already exist for circuit disruptions that are precise in both space and time. However, determining the best location and time for such disruptions, with minimal off-target effects, will require far greater knowledge of circuits than currently exists: we need good characterizations of interactions among brain regions, including their timing relative to behavior. The overarching aim of our research is to provide methods for describing the flow of information, based on evolving neural activity, among multiple regions of the brain during behavioral tasks. Such methods can lead to major advances in experimental design and, ultimately, to far better treatments than currently exist.",doesn't mention a harmful application,43
Reducing Adversarially Robust Learning to Non-Robust PAC Learning,https://proceedings.neurips.cc/paper/2020/file/a822554e5403b1d370db84cfbc530503-Paper.pdf,"Learning predictors that are robust to adversarial perturbations is an important challenge in contemporary machine learning. Current machine learning systems have been shown to be brittle against different notions of robustness such as adversarial perturbations [Szegedy et al., 2013, Biggio et al., 2013, Goodfellow et al., 2014], and there is an ongoing effort to devise methods for learning predictors that are adversarially robust. As machine learning systems become increasingly integrated into our everyday lives, it becomes crucial to provide guarantees about their performance, even when they are used outside their intended conditions. We already have many tools developed for standard learning, and having a universal wrapper that can take any standard learning method and turn it into a robust learning method could greatly simplify the development and deployment of learning that is robust to test-time adversarial perturbations. The results that we present in this paper are still mostly theoretical, and limited to the realizable setting, but we expect and hope they will lead to further theoretical study as well as practical methodological development with direct impact on applications. In this work we do not deal with training-time adversarial attacks, which are a major, though very different, concern in many cases. As with any technology, having a more robust technology can have positive and negative societal consequences, and this depends mainly on how such technology is utilized. Our intent in this research is to help with the design of robust machine learning systems for application domains such as healthcare and transportation, where it is critical to ensure performance guarantees even outside intended conditions. In situations where there is a tradeoff between robustness and accuracy, this work might be harmful in that it would prioritize robustness over accuracy, which may not be ideal in some application domains.",mentions a harmful application,44
Online Non-Convex Optimization with Imperfect Feedback,https://proceedings.neurips.cc/paper/2020/file/c7c46d4baf816bfb07c7f3bf96d88544-Paper.pdf,This is a theoretical work which does not present any foreseeable societal consequence.,doesn't mention a harmful application,45
Digraph Inception Convolutional Networks,https://proceedings.neurips.cc/paper/2020/file/cffb6e2288a630c2a787a64ccc67097c-Paper.pdf,"GCNs could be applied to a wide range of applications, including image segmentation [27], speech recognition [14], recommender systems [17], point clouds [50, 24], traffic prediction [25], and many more [45]. Our method can help expand the graph types in these application scenarios from undirected to directed and obtain multi-scale features from the high-order hidden directed structure. For traffic prediction, our method can be used in map applications to obtain more fine-grained and accurate predictions. This requires users to provide location information, which carries a risk of privacy leakage. The same concerns also arise in social network analysis [38], person re-ID [35], and NLP [49], which use graph convolutional networks as their feature extraction methods. Another potential risk is that our model may be adversarially attacked by adding new nodes or deleting existing edges. For example, in a graph-based recommender system, our model may produce completely different recommendation results after being attacked. We see opportunities for research applying DiGCN to beneficial purposes, such as investigating the ability of DiGCN to discover hidden complex directed structure, the limitations of the approximation method based on personalized PageRank, and the feature oversmoothing problem in digraphs. We also encourage follow-up research to design derivative methods for different tasks based on our method.",mentions a harmful application,46
Learning Physical Constraints with Neural Projections,https://proceedings.neurips.cc/paper/2020/file/37bc5e7fb6931a50b3464ec66179085f-Paper.pdf,This research constitutes a technical advance by employing constraint projection operations to enhance the prediction capability for physical systems with unknown dynamics. It opens up new possibilities to effectively and intuitively represent complicated physical systems from direct and limited observation. This research blurs the borders between the communities of machine learning and fast physics simulation in the computer graphics and gaming industries. Our model does not raise any significant ethical concerns.,doesn't mention a harmful application,47
Sub-sampling for Efficient Non-Parametric Bandit Exploration,https://proceedings.neurips.cc/paper/2020/file/3ab6be46e1d6b21d59a3c3a0b9d0f6ef-Paper.pdf,"This work advertises a new way to perform non-parametric exploration in bandit models, one that enjoys good empirical performance and strong theoretical guarantees. First, bandit problems are at the heart of numerous applications in online content recommendation, hence the good performance of SDA algorithms may inspire new algorithms for more realistic models used in these applications, such as contextual bandits. Second, exploration is a central question in the broader field of reinforcement learning, hence new ideas for bandits may lead to new ideas for reinforcement learning.",doesn't mention a harmful application,48
The Discrete Gaussian for Differential Privacy,https://proceedings.neurips.cc/paper/2020/file/b53b3a3d6ab90ce0268229151c9bde11-Paper.pdf,"We have provided a thorough analysis of the privacy and utility properties of the discrete Gaussian and the practicality of sampling it. The impact of this work is that it makes the real-world deployment of differential privacy more practical and secure. In particular, we bridge the gap between the theory (which considers continuous distributions) and the practice (where precision is finite and numerical errors can cause dramatic privacy failures). We hope that the discrete Gaussian will be used in practice and, further, that our work is critical to enabling these real-world deployments. The positive impact of this work is clear: differential privacy provides a principled and quantitative way to balance rigorous privacy guarantees and statistical utility in data analysis. If this technology is adopted, it can provide untrusted third parties controlled access to data (e.g., to enable scientific research), while affording the data subjects (i.e., the general public) an adequate level of privacy protection. In any case, our methods are better than flawed methods (i.e., naïve floating-point implementations of continuous distributions) that inject noise without actually protecting privacy, or methods (such as rounding or discrete Laplace) that offer a worse privacy-utility tradeoff. The negative impact of this work is less clear. All technologies can be misused. For example, an organization may be able to deceptively claim that its system protects privacy on the basis that it is differentially private when, in reality, it is not private at all, because its privacy parameter is enormous (e.g., ε = 10^6). One needs to be careful and critical about promises made by such companies, and to educate the general audience about what differential privacy does provide, what it does not, and when its guarantees end up being meaningless. However, we must acknowledge that there is a small – but vocal – group of people who do not want differential privacy to be deployed in practice. In particular, the US Census Bureau’s planned adoption of differential privacy for the 2020 US Census has met staunch opposition from some social scientists. We cannot speak for the opponents of differential privacy; many of their objections do not make sense to us, and thus it would be inappropriate for us to try summarizing them. However, there is a salient point that needs to be discussed: differential privacy provides a principled and quantitative way to balance rigorous privacy guarantees and statistical utility in data analysis. This is good in theory, but, in practice, privacy versus utility is a heated and muddy debate. On one hand, data users (such as social scientists) want unfettered access to the raw data. On the other hand, privacy advocates want the data locked up or never collected in the first place. The technology of differential privacy offers a vehicle for compromise. Yet some parties are not interested in compromise. In particular, users of census data are accustomed to largely unrestricted data access. From a privacy perspective, this is unsustainable – the development of reconstruction attacks and the availability of large auxiliary datasets for linking/re-identification have shown that census data needs more robust protections.
Understandably, those who rely on census data are deeply concerned about anything that may compromise their ability to conduct research. The adoption of differential privacy has prompted uncomfortable (but necessary) discussions about the value of providing data access relative to the privacy cost. In particular, it is necessary to decide how to allocate the privacy budget – which statistics are most important to release accurately? Another dimension of the privacy-versus-utility debate is how it affects small communities, such as racial/ethnic minorities or rural populations. Smaller populations inherently suffer a harsher privacy-utility tradeoff. Differential privacy is almost always defined so that it provides every person with an equal level of privacy. Consequently, differentially private statistics for smaller populations (e.g., Native Americans in a small settlement) will be less accurate than for larger populations (e.g., Whites in a large US city). More precisely, noise addition methods like ours offer the same absolute accuracy on all populations, but the relative accuracy will be worse when the denominator (i.e., population size) is smaller. The only alternative is to offer small communities weaker privacy protections. We stress that this issue is not specific to differential privacy. For example, if we rely on anonymity or de-identification, then we must grapple with the fact that minorities are more easily re-identified, since, by definition, minorities are more unique. This is a fundamental tradeoff that needs to be carefully considered with input from the minorities and communities concerned. Ultimately, computer scientists can only provide tools and it is up to policymakers in government and other organizations to decide how to use them. This work, along with the broader literature on differential privacy, provides such tools. However, the research community also has a responsibility to provide instructions for how these tools should and should not be used.",mentions a harmful application,49