Tasks: Text Classification
Modalities: Text
Sub-tasks: multi-class-classification
Languages: English
Size: 10K - 100K
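The rows below are a preview of the underlying bibliographic export, with one record per line and the columns Title, Abstract Note, Url, Publication Year, Item Type, Author, Publication Title, and Label. A minimal loading-and-inspection sketch follows; the file path `tai_safety_bibliography.csv` is a hypothetical stand-in for wherever the CSV export actually lives, and the column names are taken from the header row below.

```python
# Loading/inspection sketch. The file name is an assumption; the column names
# are copied from the preview's header row.
import pandas as pd

COLUMNS = [
    "Title", "Abstract Note", "Url", "Publication Year",
    "Item Type", "Author", "Publication Title", "Label",
]

df = pd.read_csv("tai_safety_bibliography.csv", usecols=COLUMNS)

print(df.shape)                            # overall size
print(df["Label"].value_counts())          # class balance for the classification task
print(df["Abstract Note"].isna().mean())   # fraction of records without an abstract
```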
Title,Abstract Note,Url,Publication Year,Item Type,Author,Publication Title,Label | |
Malign generalization without internal search,"In my last post, I challenged the idea that inner alignment failures should be explained by appealing to agents which perform explicit internal search. By doing so, I argued that we should instead appeal to the more general concept of malign generalization, and treat mesa-misalignment as a special case. Unfortunately, the post was light on examples of what we should be worrying about instead of mesa-misalignment. Evan Hubinger wrote, Personally, I think there is a meaningful sense in which all the models I'm most worried about do some sort of search internally (at least to the same extent that humans do search internally), but I'm definitely uncertain about that. Wei Dai expressed confusion why I would want to retreat to malign generalization without some sort of concrete failure mode in mind, Can you give some realistic examples/scenarios of “malign generalization” that does not involve mesa optimization? I’m not sure what kind of thing you’re actually worried about here. In this post, I will outline a general category of agents which may exhibit malign generalization without internal search, and then will provide a concrete example of an agent in the category. Then I will argue that, rather than being a very narrow counterexample, this class of agents could be competitive with search-based agents. THE SWITCH CASE AGENT Consider an agent governed by the following general behavior: LOOP: State = GetStateOfWorld(Observation); IF State == 1: PerformActionSequence1(); IF State == 2: PerformActionSequence2(); ... END_LOOP. It's clear that this agent does not perform any internal search for strategies: it doesn't operate by choosing actions which rank highly according to some sort of internal objective function. While you could potentially rationalize its behavior according to some observed-utility function, this would generally lead to more confusion than clarity. However, this agent could still be malign in the following way. Suppose the agent is 'mistaken' about the s",https://www.alignmentforum.org/posts/ynt9TD6PrYw6iT49m/malign-generalization-without-internal-search,2020,blogPost,"Barnett, Matthew",AI Alignment Forum,TAI safety research | |
Utility Indifference,"Consider an AI that follows its own motivations. We’re not entirely sure what its motivations are, but we would prefer that the AI cooperate with humanity; or, failing that, that we can destroy it before it defects. We’ll have someone sitting in a room, their finger on a detonator, ready at the slightest hint of defection. Unfortunately as has been noted ([3], [1]), this does not preclude the AI from misbehaving. It just means that the AI must act to take control of the explosives, the detonators or the human who will press the button. For a superlatively intelligent AI, this would represent merely a slight extra difficulty. But now imagine that the AI was somehow indifferent to the explosives going off or not (but that nothing else was changed). Then if ever the AI does decide to defect, it will most likely do so without taking control of the explosives, as that would be easier than otherwise. By “easier” we mean that the chances of failure are less, since the plan is simpler – recall that under these assumptions, the AI counts getting blown up as an equal value to successfully defecting.",,2010,report,"Armstrong, Stuart",,TAI safety research | |
Improving Sample Efficiency in Model-Free Reinforcement Learning from Images,"Training an agent to solve control tasks directly from high-dimensional images with model-free reinforcement learning (RL) has proven difficult. A promising approach is to learn a latent representation together with the control policy. However, fitting a high-capacity encoder using a scarce reward signal is sample inefficient and leads to poor performance. Prior work has shown that auxiliary losses, such as image reconstruction, can aid efficient representation learning. However, incorporating reconstruction loss into an off-policy learning algorithm often leads to training instability. We explore the underlying reasons and identify variational autoencoders, used by previous investigations, as the cause of the divergence. Following these findings, we propose effective techniques to improve training stability. This results in a simple approach capable of matching state-of-the-art model-free and model-based algorithms on MuJoCo control tasks. Furthermore, our approach demonstrates robustness to observational noise, surpassing existing approaches in this setting. Code, results, and videos are anonymously available at https://sites.google.com/view/sac-ae/home.",http://arxiv.org/abs/1910.01741,2020,manuscript,"Yarats, Denis; Zhang, Amy; Kostrikov, Ilya; Amos, Brandon; Pineau, Joelle; Fergus, Rob",,not TAI safety research | |
Teaching A.I. Systems to Behave Themselves (Published 2017),"As philosophers and pundits worry that artificial intelligence will one day harm the world, some researchers are working on ways to lower the risks.",https://www.nytimes.com/2017/08/13/technology/artificial-intelligence-safety-training.html,2017,newspaperArticle,"Metz, Cade",The New York Times,not TAI safety research | |
Incentives in Teams,,https://www.jstor.org/stable/1914085?origin=crossref,1973,journalArticle,"Groves, Theodore",Econometrica,not TAI safety research | |
A bargaining-theoretic approach to moral uncertainty,"This paper explores a new approach to the problem of decision under relevant moral uncertainty. We treat the case of an agent making decisions in the face of moral uncertainty on the model of bargaining theory, as if the decision-making process were one of bargaining among different internal parts of the agent, with different parts committed to different moral theories. The resulting approach contrasts interestingly with the extant “maximise expected choiceworthiness” and “my favourite theory” approaches, in several key respects. In particular, it seems somewhat less prone than the MEC approach to ‘fanaticism’: allowing decisions to be dictated by a theory in which the agent has extremely low credence, if the relative stakes are high enough. Overall, however, we tentatively conclude that the MEC approach is superior to a bargaining-theoretic approach.",,2019,report,"Greaves, Hilary; Cotton-Barratt, Owen",,not TAI safety research | |
The Timing of Evolutionary Transitions Suggests Intelligent Life Is Rare,"It is unknown how abundant extraterrestrial life is, or whether such life might be complex or intelligent. On Earth, the emergence of complex intelligent life required a preceding series of evolutionary transitions such as abiogenesis, eukaryogenesis, and the evolution of sexual reproduction, multicellularity, and intelligence itself. Some of these transitions could have been extraordinarily improbable, even in conducive environments. The emergence of intelligent life late in Earth's lifetime is thought to be evidence for a handful of rare evolutionary transitions, but the timing of other evolutionary transitions in the fossil record is yet to be analyzed in a similar framework. Using a simplified Bayesian model that combines uninformative priors and the timing of evolutionary transitions, we demonstrate that expected evolutionary transition times likely exceed the lifetime of Earth, perhaps by many orders of magnitude. Our results corroborate the original argument suggested by Brandon Carter that intelligent life in the Universe is exceptionally rare, assuming that intelligent life elsewhere requires analogous evolutionary transitions. Arriving at the opposite conclusion would require exceptionally conservative priors, evidence for much earlier transitions, multiple instances of transitions, or an alternative model that can explain why evolutionary transitions took hundreds of millions of years without appealing to rare chance events. Although the model is simple, it provides an initial basis for evaluating how varying biological assumptions and fossil record data impact the probability of evolving intelligent life, and also provides a number of testable predictions, such as that some biological paradoxes will remain unresolved and that planets orbiting M dwarf stars are uninhabitable.",https://www.liebertpub.com/doi/full/10.1089/ast.2019.2149,2020,journalArticle,"Snyder-Beattie, Andrew E.; Sandberg, Anders; Drexler, K. Eric; Bonsall, Michael B.",Astrobiology,not TAI safety research | |
Changing Identity: Retiring from Unemployment,,https://academic.oup.com/ej/article/124/575/149-166/5076984,2014,journalArticle,"Hetschko, Clemens; Knabe, Andreas; Schöb, Ronnie",The Economic Journal,not TAI safety research | |
Model-Based Reinforcement Learning via Meta-Policy Optimization,"Model-based reinforcement learning approaches carry the promise of being data efficient. However, due to challenges in learning dynamics models that sufficiently match the real-world dynamics, they struggle to achieve the same asymptotic performance as model-free methods. We propose Model-Based Meta-Policy-Optimization (MB-MPO), an approach that foregoes the strong reliance on accurate learned dynamics models. Using an ensemble of learned dynamic models, MB-MPO meta-learns a policy that can quickly adapt to any model in the ensemble with one policy gradient step. This steers the meta-policy towards internalizing consistent dynamics predictions among the ensemble while shifting the burden of behaving optimally w.r.t. the model discrepancies towards the adaptation step. Our experiments show that MB-MPO is more robust to model imperfections than previous model-based approaches. Finally, we demonstrate that our approach is able to match the asymptotic performance of model-free methods while requiring significantly less experience.",http://arxiv.org/abs/1809.05214,2018,manuscript,"Clavera, Ignasi; Rothfuss, Jonas; Schulman, John; Fujita, Yasuhiro; Asfour, Tamim; Abbeel, Pieter",,not TAI safety research | |
Advancing rational analysis to the algorithmic level,"The commentaries raised questions about normativity, human rationality, cognitive architectures, cognitive constraints, and the scope of resource rational analysis (RRA). We respond to these questions and clarify that RRA is a methodological advance that extends the scope of rational modeling to understanding cognitive processes, why they differ between people, why they change over time, and how they could be improved.",https://www.cambridge.org/core/product/identifier/S0140525X19002012/type/journal_article,2020,journalArticle,"Lieder, Falk; Griffiths, Thomas L.",Behavioral and Brain Sciences,not TAI safety research | |
Confronting future catastrophic threats to humanity,,https://linkinghub.elsevier.com/retrieve/pii/S0016328715001135,2015,journalArticle,"Baum, Seth D.; Tonn, Bruce E.",Futures,TAI safety research | |
Latent Variables and Model Mis-Specification,"Posted as part of the AI Alignment Forum sequence on Value Learning. Rohin's note: So far, we’ve seen that ambitious value learning needs to understand human biases, and that we can't simply learn the biases in tandem with the reward. Perhaps we could hardcode a specific model of human biases? Such a model is likely to be incomplete and inaccurate, but it will perform better than assuming an optimal human, and as we notice failure modes we can improve the model. In the language of this post by Jacob Steinhardt (original here), we are using a mis-specified human model. The post talks about why model mis-specification is worse than it may seem at first glance. This post is fairly technical and may not be accessible if you don’t have a background in machine learning. If so, you can skip this post and still understand the rest of the posts in the sequence. However, if you want to do ML-related safety research, I strongly recommend putting in the effort to understand the problems that can arise with mis-specification. -------------------------------------------------------------------------------- Machine learning is very good at optimizing predictions to match an observed signal — for instance, given a dataset of input images and labels of the images (e.g. dog, cat, etc.), machine learning is very good at correctly predicting the label of a new image. However, performance can quickly break down as soon as we care about criteria other than predicting observables. There are several cases where we might care about such criteria: * In scientific investigations, we often care less about predicting a specific observable phenomenon, and more about what that phenomenon implies about an underlying scientific theory. * In economic analysis, we are most interested in what policies will lead to desirable outcomes. This requires predicting what would counterfactually happen if we were to enact the policy, which we (usually) don’t have any data about. * In ma",https://www.alignmentforum.org/posts/gnvrixhDfG7S2TpNL/latent-variables-and-model-mis-specification,2018,blogPost,"Steinhardt, Jacob",AI Alignment Forum,TAI safety research | |
Economics of the singularity,,http://ieeexplore.ieee.org/document/4531461/,2008,journalArticle,"Hanson, Robin",IEEE Spectrum,TAI safety research | |
Penalizing side effects using stepwise relative reachability,"How can we design safe reinforcement learning agents that avoid unnecessary disruptions to their environment? We show that current approaches to penalizing side effects can introduce bad incentives, e.g. to prevent any irreversible changes in the environment, including the actions of other agents. To isolate the source of such undesirable incentives, we break down side effects penalties into two components: a baseline state and a measure of deviation from this baseline state. We argue that some of these incentives arise from the choice of baseline, and others arise from the choice of deviation measure. We introduce a new variant of the stepwise inaction baseline and a new deviation measure based on relative reachability of states. The combination of these design choices avoids the given undesirable incentives, while simpler baselines and the unreachability measure fail. We demonstrate this empirically by comparing different combinations of baseline and deviation measure choices on a set of gridworld experiments designed to illustrate possible bad incentives.",http://arxiv.org/abs/1806.01186,2019,conferencePaper,"Krakovna, Victoria; Orseau, Laurent; Kumar, Ramana; Martic, Miljan; Legg, Shane",Proceedings of the Workshop on Artificial Intelligence Safety 2019,TAI safety research | |
“Explaining” machine learning reveals policy challenges,,https://www.sciencemag.org/lookup/doi/10.1126/science.aba9647,2020,journalArticle,"Coyle, Diane; Weller, Adrian",Science,TAI safety research | |
How unlikely is a doomsday catastrophe?,"Numerous Earth-destroying doomsday scenarios have recently been analyzed, including breakdown of a metastable vacuum state and planetary destruction triggered by a ""strangelet"" or microscopic black hole. We point out that many previous bounds on their frequency give a false sense of security: one cannot infer that such events are rare from the fact that Earth has survived for so long, because observers are by definition in places lucky enough to have avoided destruction. We derive a new upper bound of one per 10^9 years (99.9% c.l.) on the exogenous terminal catastrophe rate that is free of such selection bias, using planetary age distributions and the relatively late formation time of Earth.",https://arxiv.org/abs/astro-ph/0512204v2,2005,manuscript,"Tegmark, Max; Bostrom, Nick",,TAI safety research | |
A new model and dataset for long-range memory,"This blog introduces a new long-range memory model, the Compressive Transformer, alongside a new benchmark for book-level language modelling, PG19. We provide the conceptual tools needed to understand this new research in the context of recent developments in memory models and language modelling.",deepmind.com/blog/article/A_new_model_and_dataset_for_long-range_memory,2020,blogPost,"Rae, Jack; Lillicrap, Timothy",Deepmind,not TAI safety research | |
Safe Imitation Learning via Fast Bayesian Reward Inference from Preferences,"Bayesian reward learning from demonstrations enables rigorous safety and uncertainty analysis when performing imitation learning. However, Bayesian reward learning methods are typically computationally intractable for complex control problems. We propose Bayesian Reward Extrapolation (Bayesian REX), a highly efficient Bayesian reward learning algorithm that scales to high-dimensional imitation learning problems by pre-training a low-dimensional feature encoding via self-supervised tasks and then leveraging preferences over demonstrations to perform fast Bayesian inference. Bayesian REX can learn to play Atari games from demonstrations, without access to the game score and can generate 100,000 samples from the posterior over reward functions in only 5 minutes on a personal laptop. Bayesian REX also results in imitation learning performance that is competitive with or better than state-of-the-art methods that only learn point estimates of the reward function. Finally, Bayesian REX enables efficient high-confidence policy evaluation without having access to samples of the reward function. These high-confidence performance bounds can be used to rank the performance and risk of a variety of evaluation policies and provide a way to detect reward hacking behaviors.",http://arxiv.org/abs/2002.09089,2020,conferencePaper,"Brown, Daniel S.; Coleman, Russell; Srinivasan, Ravi; Niekum, Scott","arXiv:2002.09089 [cs, stat]",TAI safety research | |
Specification gaming: the flip side of AI ingenuity,"Specification gaming is a behaviour that satisfies the literal specification of an objective without achieving the intended outcome. We have all had experiences with specification gaming, even if not by this name. Readers may have heard the myth of King Midas and the golden touch, in which the king asks that anything he touches be turned to gold - but soon finds that even food and drink turn to metal in his hands. In the real world, when rewarded for doing well on a homework assignment, a student might copy another student to get the right answers, rather than learning the material - and thus exploit a loophole in the task specification.",deepmind.com/blog/article/Specification-gaming-the-flip-side-of-AI-ingenuity,2020,blogPost,"Krakovna, Victoria; Uesato, Jonathan; Mikulik, Vladimir; Rahtz, Matthew; Everitt, Tom; Kumar, Ramana; Kenton, Zachary; Leike, Jan; Legg, Shane",Deepmind,TAI safety research | |
Vingean Reflection: Reliable Reasoning for Self-Improving Agents,"Today, human-level machine intelligence is in the domain of futurism, but there is every reason to expect that it will be developed eventually. Once artificial agents become able to improve themselves further, they may far surpass human intelligence, making it vitally important to ensure that the result of an “intelligence explosion” is aligned with human interests. In this paper, we discuss one aspect of this challenge: ensuring that the initial agent’s reasoning about its future versions is reliable, even if these future versions are far more intelligent than the current reasoner. We refer to reasoning of this sort as Vingean reflection.",https://intelligence.org/files/VingeanReflection.pdf,2015,report,"Fallenstein, Benja; Soares, Nate",,TAI safety research | |
Directed Policy Gradient for Safe Reinforcement Learning with Human Advice,"Many currently deployed Reinforcement Learning agents work in an environment shared with humans, be them co-workers, users or clients. It is desirable that these agents adjust to people's preferences, learn faster thanks to their help, and act safely around them. We argue that most current approaches that learn from human feedback are unsafe: rewarding or punishing the agent a-posteriori cannot immediately prevent it from wrong-doing. In this paper, we extend Policy Gradient to make it robust to external directives, that would otherwise break the fundamentally on-policy nature of Policy Gradient. Our technique, Directed Policy Gradient (DPG), allows a teacher or backup policy to override the agent before it acts undesirably, while allowing the agent to leverage human advice or directives to learn faster. Our experiments demonstrate that DPG makes the agent learn much faster than reward-based approaches, while requiring an order of magnitude less advice.",http://arxiv.org/abs/1808.04096,2018,manuscript,"Plisnier, Hélène; Steckelmacher, Denis; Brys, Tim; Roijers, Diederik M.; Nowé, Ann",,TAI safety research | |
Cognitive prostheses for goal achievement,"Procrastination takes a considerable toll on people’s lives, the economy and society at large. Procrastination is often a consequence of people’s propensity to prioritize their immediate experiences over the long-term consequences of their actions. This suggests that aligning immediate rewards with long-term values could be a promising way to help people make more future-minded decisions and overcome procrastination. Here we develop an approach to decision support that leverages artificial intelligence and game elements to restructure challenging sequential decision problems in such a way that it becomes easier for people to take the right course of action. A series of four increasingly realistic experiments suggests that this approach can enable people to make better decisions faster, procrastinate less, complete their work on time and waste less time on unimportant tasks. These findings suggest that our method is a promising step towards developing cognitive prostheses that help people achieve their goals.",https://www.nature.com/articles/s41562-019-0672-9,2019,journalArticle,"Lieder, Falk; Chen, Owen X.; Krueger, Paul M.; Griffiths, Thomas L.",Nature Human Behaviour,not TAI safety research | |
Forecasting Transformative AI: An Expert Survey,"Transformative AI technologies have the potential to reshape critical aspects of society in the near future. However, in order to properly prepare policy initiatives for the arrival of such technologies accurate forecasts and timelines are necessary. A survey was administered to attendees of three AI conferences during the summer of 2018 (ICML, IJCAI and the HLAI conference). The survey included questions for estimating AI capabilities over the next decade, questions for forecasting five scenarios of transformative AI and questions concerning the impact of computational resources in AI research. Respondents indicated a median of 21.5% of human tasks (i.e., all tasks that humans are currently paid to do) can be feasibly automated now, and that this figure would rise to 40% in 5 years and 60% in 10 years. Median forecasts indicated a 50% probability of AI systems being capable of automating 90% of current human tasks in 25 years and 99% of current human tasks in 50 years. The conference of attendance was found to have a statistically significant impact on all forecasts, with attendees of HLAI providing more optimistic timelines with less uncertainty. These findings suggest that AI experts expect major advances in AI technology to continue over the next decade to a degree that will likely have profound transformative impacts on society.",http://arxiv.org/abs/1901.08579,2019,manuscript,"Gruetzemacher, Ross; Paradice, David; Lee, Kang Bok",,TAI safety research | |
Guide Me: Interacting with Deep Networks,"Interaction and collaboration between humans and intelligent machines has become increasingly important as machine learning methods move into real-world applications that involve end users. While much prior work lies at the intersection of natural language and vision, such as image captioning or image generation from text descriptions, less focus has been placed on the use of language to guide or improve the performance of a learned visual processing algorithm. In this paper, we explore methods to flexibly guide a trained convolutional neural network through user input to improve its performance during inference. We do so by inserting a layer that acts as a spatio-semantic guide into the network. This guide is trained to modify the network's activations, either directly via an energy minimization scheme or indirectly through a recurrent model that translates human language queries to interaction weights. Learning the verbal interaction is fully automatic and does not require manual text annotations. We evaluate the method on two datasets, showing that guiding a pre-trained network can improve performance, and provide extensive insights into the interaction between the guide and the CNN.",http://arxiv.org/abs/1803.11544,2018,conferencePaper,"Rupprecht, Christian; Laina, Iro; Navab, Nassir; Hager, Gregory D.; Tombari, Federico",Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR),not TAI safety research | |
Thread: Circuits,What can we learn if we invest heavily in reverse engineering a single neural network?,https://distill.pub/2020/circuits,2020,journalArticle,"Cammarata, Nick; Carter, Shan; Goh, Gabriel; Olah, Chris; Petrov, Michael; Schubert, Ludwig",Distill,not TAI safety research | |
Visualizing Representations: Deep Learning and Human Beings - colah's blog,,http://colah.github.io/posts/2015-01-Visualizing-Representations/,2015,blogPost,"Olah, Chris",Colah's blog,not TAI safety research | |
One Decade of Universal Artificial Intelligence,"The first decade of this century has seen the nascency of the first mathematical theory of general artificial intelligence. This theory of Universal Artificial Intelligence (UAI) has made significant contributions to many theoretical, philosophical, and practical AI questions. In a series of papers culminating in book (Hutter, 2005), an exciting sound and complete mathematical model for a super intelligent agent (AIXI) has been developed and rigorously analyzed. While nowadays most AI researchers avoid discussing intelligence, the award-winning PhD thesis (Legg, 2008) provided the philosophical embedding and investigated the UAI-based universal measure of rational intelligence, which is formal, objective and non-anthropocentric. Recently, effective approximations of AIXI have been derived and experimentally investigated in JAIR paper (Veness et al. 2011). This practical breakthrough has resulted in some impressive applications, finally muting earlier critique that UAI is only a theory. For the first time, without providing any domain knowledge, the same agent is able to self-adapt to a diverse range of interactive environments. For instance, AIXI is able to learn from scratch to play TicTacToe, Pacman, Kuhn Poker, and other games by trial and error, without even providing the rules of the games. These achievements give new hope that the grand goal of Artificial General Intelligence is not elusive. This article provides an informal overview of UAI in context. It attempts to gently introduce a very theoretical, formal, and mathematical subject, and discusses philosophical and technical ingredients, traits of intelligence, some social questions, and the past and future of UAI.",http://arxiv.org/abs/1202.6153,2012,journalArticle,"Hutter, Marcus",Theoretical Foundations of Artificial General Intelligence,TAI safety research | |
Should Artificial Intelligence Governance be Centralised?: Design Lessons from History,,https://dl.acm.org/doi/10.1145/3375627.3375857,2020,conferencePaper,"Cihon, Peter; Maas, Matthijs M.; Kemp, Luke","Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society",TAI safety research | |
Feature Expansive Reward Learning: Rethinking Human Input,"In collaborative human-robot scenarios, when a person is not satisfied with how a robot performs a task, they can intervene to correct it. Reward learning methods enable the robot to adapt its reward function online based on such human input. However, due to the real-time nature of the input, this online adaptation requires low sample complexity algorithms which rely on simple functions of handcrafted features. In practice, pre-specifying an exhaustive set of features the person might care about is impossible; what should the robot do when the human correction cannot be explained by the features it already has access to? Recent progress in deep Inverse Reinforcement Learning (IRL) suggests that the robot could fall back on demonstrations: ask the human for demonstrations of the task, and recover a reward defined over not just the known features, but also the raw state space. Our insight is that rather than implicitly learning about the missing feature(s) from task demonstrations, the robot should instead ask for data that explicitly teaches it about what it is missing. We introduce a new type of human input, in which the person guides the robot from areas of the state space where the feature she is teaching is highly expressed to states where it is not. We propose an algorithm for learning the feature from the raw state space and integrating it into the reward function. By focusing the human input on the missing feature, our method decreases sample complexity and improves generalization of the learned reward over the above deep IRL baseline. We show this in experiments with a 7DOF robot manipulator. Finally, we discuss our method’s potential implications for deep reward learning more broadly: taking a divide-and-conquer approach that focuses on important features separately before learning from demonstrations can improve generalization in tasks where such features are easy for the human to teach.",http://arxiv.org/abs/2006.13208,2020,manuscript,"Bobu, Andreea; Wiggert, Marius; Tomlin, Claire; Dragan, Anca D.",,not TAI safety research | |
Emergent Complexity via Multi-Agent Competition,"Reinforcement learning algorithms can train agents that solve problems in complex, interesting environments. Normally, the complexity of the trained agent is closely related to the complexity of the environment. This suggests that a highly capable agent requires a complex environment for training. In this paper, we point out that a competitive multi-agent environment trained with self-play can produce behaviors that are far more complex than the environment itself. We also point out that such environments come with a natural curriculum, because for any skill level, an environment full of agents of this level will have the right level of difficulty. This work introduces several competitive multi-agent environments where agents compete in a 3D world with simulated physics. The trained agents learn a wide variety of complex and interesting skills, even though the environment themselves are relatively simple. The skills include behaviors such as running, blocking, ducking, tackling, fooling opponents, kicking, and defending using both arms and legs. A highlight of the learned behaviors can be found here: https://goo.gl/eR7fbX",http://arxiv.org/abs/1710.03748,2018,conferencePaper,"Bansal, Trapit; Pachocki, Jakub; Sidor, Szymon; Sutskever, Ilya; Mordatch, Igor",arXiv:1710.03748 [cs],not TAI safety research | |
Learning Agile Robotic Locomotion Skills by Imitating Animals,"Reproducing the diverse and agile locomotion skills of animals has been a longstanding challenge in robotics. While manually-designed controllers have been able to emulate many complex behaviors, building such controllers involves a time-consuming and difficult development process, often requiring substantial expertise of the nuances of each skill. Reinforcement learning provides an appealing alternative for automating the manual effort involved in the development of controllers. However, designing learning objectives that elicit the desired behaviors from an agent can also require a great deal of skill-specific expertise. In this work, we present an imitation learning system that enables legged robots to learn agile locomotion skills by imitating real-world animals. We show that by leveraging reference motion data, a single learning-based approach is able to automatically synthesize controllers for a diverse repertoire behaviors for legged robots. By incorporating sample efficient domain adaptation techniques into the training process, our system is able to learn adaptive policies in simulation that can then be quickly adapted for real-world deployment. To demonstrate the effectiveness of our system, we train an 18-DoF quadruped robot to perform a variety of agile behaviors ranging from different locomotion gaits to dynamic hops and turns.",http://arxiv.org/abs/2004.00784,2020,conferencePaper,"Peng, Xue Bin; Coumans, Erwin; Zhang, Tingnan; Lee, Tsang-Wei; Tan, Jie; Levine, Sergey",arXiv:2004.00784 [cs],not TAI safety research | |
Antitrust-Compliant AI Industry Self-Regulation,"The touchstone of antitrust compliance is competition. To be legally permissible, any industrial restraint on trade must have sufficient countervailing procompetitive justifications. Usually, anticompetitive horizontal agreements like boycotts (including a refusal to produce certain products) are per se illegal.",https://cullenokeefe.com/blog/antitrust-compliant-ai-industry-self-regulation,2020,manuscript,"O’Keefe, Cullen",,TAI safety research | |
Machine Learning Explainability for External Stakeholders,"As machine learning is increasingly deployed in high-stakes contexts affecting people's livelihoods, there have been growing calls to open the black box and to make machine learning algorithms more explainable. Providing useful explanations requires careful consideration of the needs of stakeholders, including end-users, regulators, and domain experts. Despite this need, little work has been done to facilitate inter-stakeholder conversation around explainable machine learning. To help address this gap, we conducted a closed-door, day-long workshop between academics, industry experts, legal scholars, and policymakers to develop a shared language around explainability and to understand the current shortcomings of and potential solutions for deploying explainable machine learning in service of transparency goals. We also asked participants to share case studies in deploying explainable machine learning at scale. In this paper, we provide a short summary of various case studies of explainable machine learning, lessons from those studies, and discuss open challenges.",https://arxiv.org/abs/2007.05408v1,2020,conferencePaper,"Bhatt, Umang; Andrus, McKane; Weller, Adrian; Xiang, Alice",,TAI safety research | |
Avoiding Wireheading with Value Reinforcement Learning,"How can we design good goals for arbitrarily intelligent agents? Reinforcement learning (RL) is a natural approach. Unfortunately, RL does not work well for generally intelligent agents, as RL agents are incentivised to shortcut the reward sensor for maximum reward -- the so-called wireheading problem. In this paper we suggest an alternative to RL called value reinforcement learning (VRL). In VRL, agents use the reward signal to learn a utility function. The VRL setup allows us to remove the incentive to wirehead by placing a constraint on the agent's actions. The constraint is defined in terms of the agent's belief distributions, and does not require an explicit specification of which actions constitute wireheading.",http://arxiv.org/abs/1605.03143,2016,conferencePaper,"Everitt, Tom; Hutter, Marcus",AGI 2016: Artificial General Intelligence,TAI safety research | |
Principles for the Application of Human Intelligence,"Before humans become the standard way in which we make decisions, we need to consider the risks and ensure implementation of human decision-making systems does not cause widespread harm.",https://behavioralscientist.org/principles-for-the-application-of-human-intelligence/,2019,blogPost,"Collins, Jason",Behavioral Scientist,not TAI safety research | |
Backup utility functions as a fail-safe AI technique,"Many experts believe that AIs will, within the not-too-distant future, become powerful enough for their decisions to have tremendous impact. Unfortunately, setting up AI goal systems in a way that results in benevolent behavior is expected to be difficult, and we cannot be certain to get it completely right on the first attempt. We should therefore account for the possibility that the goal systems fail to implement our values the intended way. In this paper, we propose the idea of backup utility functions: Secondary utility functions that are used in case the primary ones “fail”. We also describe how this approach can be generalized to the use of multi-layered utility functions, some of which can fail without affecting the final outcome as badly as without the backup mechanism.",https://longtermrisk.org/files/backup-utility-functions.pdf,2016,manuscript,"Oesterheld, Caspar",,TAI safety research | |
Predicting human decisions with behavioral theories and machine learning,"Behavioral decision theories aim to explain human behavior. Can they help predict it? An open tournament for prediction of human choices in fundamental economic decision tasks is presented. The results suggest that integration of certain behavioral theories as features in machine learning systems provides the best predictions. Surprisingly, the most useful theories for prediction build on basic properties of human and animal learning and are very different from mainstream decision theories that focus on deviations from rational choice. Moreover, we find that theoretical features should be based not only on qualitative behavioral insights (e.g. loss aversion), but also on quantitative behavioral foresights generated by functional descriptive models (e.g. Prospect Theory). Our analysis prescribes a recipe for derivation of explainable, useful predictions of human decisions.",http://arxiv.org/abs/1904.06866,2019,manuscript,"Plonsky, Ori; Apel, Reut; Ert, Eyal; Tennenholtz, Moshe; Bourgin, David; Peterson, Joshua C.; Reichman, Daniel; Griffiths, Thomas L.; Russell, Stuart J.; Carter, Evan C.; Cavanagh, James F.; Erev, Ido",,TAI safety research | |
"Exchange-Traded Funds, Market Structure, and the Flash Crash",,https://www.tandfonline.com/doi/full/10.2469/faj.v68.n4.6,2012,journalArticle,"Madhavan, Ananth",Financial Analysts Journal,not TAI safety research | |
A general model of safety-oriented AI development,"This may be trivial or obvious for a lot of people, but it doesn't seem like anyone has bothered to write it down (or I haven't looked hard enough). It started out as a generalization of Paul Christiano's IDA, but also covers things like safe recursive self-improvement. Start with a team of one or more humans (researchers, programmers, trainers, and/or overseers), with access to zero or more AIs (initially as assistants). The human/AI team in each round develops a new AI and adds it to the team, and repeats this until maturity in AI technology is achieved. Safety/alignment is ensured by having some set of safety/alignment properties on the team that is inductively maintained by the development process. The reason I started thinking in this direction is that Paul's approach seemed very hard to knock down, because any time a flaw or difficulty is pointed out or someone expresses skepticism on some technique that it uses or the overall safety invariant, there's always a list of other techniques or invariants that could be substituted in for that part (sometimes in my own brain as I tried to criticize some part of it). Eventually I realized this shouldn't be surprising because IDA is an instance of this more general model of safety-oriented AI development, so there are bound to be many points near it in the space of possible safety-oriented AI development practices. (Again, this may already be obvious to others including Paul, and in their minds IDA is perhaps already a cluster of possible development practices consisting of the most promising safety techniques and invariants, rather than a single point.) If this model turns out not to have been written down before, perhaps it should be assigned a name, like Iterated Safety-Invariant AI-Assisted AI Development, or something pithier?",https://www.alignmentforum.org/posts/idb5Ppp9zghcichJ5/a-general-model-of-safety-oriented-ai-development,2018,blogPost,Wei Dai,AI Alignment Forum,TAI safety research | |
The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions,"The last few years have seen a proliferation of principles for AI ethics. There is substantial overlap between different sets of principles, with widespread agreement that AI should be used for the common good, should not be used to harm people or undermine their rights, and should respect widely held values such as fairness, privacy, and autonomy. While articulating and agreeing on principles is important, it is only a starting point. Drawing on comparisons with the field of bioethics, we highlight some of the limitations of principles: in particular, they are often too broad and high-level to guide ethics in practice. We suggest that an important next step for the field of AI ethics is to focus on exploring the tensions that inevitably arise as we try to implement principles in practice. By explicitly recognising these tensions we can begin to make decisions about how they should be resolved in specific cases, and develop frameworks and guidelines for AI ethics that are rigorous and practically relevant. We discuss some different specific ways that tensions arise in AI ethics, and what processes might be needed to resolve them.",,2019,conferencePaper,"Whittlestone, Jess; Nyrup, Rune; Alexandrova, Anna; Cave, Stephen","AIES '19: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society",TAI safety research | |
Enhancing metacognitive reinforcement learning using reward structures and feedback,"How do we learn to think better, and what can we do to promote such metacognitive learning? Here, we propose that cognitive growth proceeds through metacognitive reinforcement learning. We apply this theory to model how people learn how far to plan ahead and test its predictions about the speed of metacognitive learning in two experiments. In the first experiment, we find that our model can discern a reward structure that promotes metacognitive reinforcement learning from one that hinders it. In the second experiment, we show that our model can be used to design a feedback mechanism that enhances metacognitive reinforcement learning in an environment that hinders learning. Our results suggest that modeling metacognitive learning is a promising step towards promoting cognitive growth.",,2017,conferencePaper,"Krueger, Paul M; Lieder, Falk; Griffiths, Thomas L",39th Annual Meeting of the Cognitive Science Society,not TAI safety research | |
Learning agents for uncertain environments (extended abstract),,http://portal.acm.org/citation.cfm?doid=279943.279964,1998,conferencePaper,"Russell, Stuart",Proceedings of the eleventh annual conference on Computational learning theory - COLT' 98,TAI safety research | |
Existential Risk and Growth,"Human activity can create or mitigate risks of catastrophes, such as nuclear war, climate change, pandemics, or artificial intelligence run amok. These could even imperil the survival of human civilization. What is the relationship between economic growth and such existential risks? In a model of directed technical change, with moderate parameters, existential risk follows a Kuznets-style inverted U-shape. This suggests we could be living in a unique “time of perils,” having developed technologies advanced enough to threaten our permanent destruction, but not having grown wealthy enough yet to be willing to spend sufficiently on safety. Accelerating growth during this “time of perils” initially increases risk, but improves the chances of humanity’s survival in the long run. Conversely, even short-term stagnation could substantially curtail the future of humanity.",,2020,report,"Aschenbrenner, Leopold",,not TAI safety research | |
Coherence arguments do not imply goal-directed behavior,"One of the most pleasing things about probability and expected utility theory is that there are many coherence arguments that suggest that these are the “correct” ways to reason. If you deviate from what the theory prescribes, then you must be executing a dominated strategy. There must be some other strategy that never does any worse than your strategy, but does strictly better than your strategy with certainty in at least one situation. There’s a good explanation of these arguments here. We shouldn’t expect mere humans to be able to notice any failures of coherence in a superintelligent agent, since if we could notice these failures, so could the agent. So we should expect that powerful agents appear coherent to us. (Note that it is possible that the agent doesn’t fix the failures because it would not be worth it -- in this case, the argument says that we will not be able to notice any exploitable failures.) Taken together, these arguments suggest that we should model an agent much smarter than us as an expected utility (EU) maximizer. And many people agree that EU maximizers are dangerous. So does this mean we’re doomed? I don’t think so: it seems to me that the problems about EU maximizers that we’ve identified are actually about goal-directed behavior or explicit reward maximizers. The coherence theorems say nothing about whether an AI system must look like one of these categories. This suggests that we could try building an AI system that can be modeled as an EU maximizer, yet doesn’t fall into one of these two categories, and so doesn’t have all of the problems that we worry about. Note that there are two different flavors of arguments that the AI systems we build will be goal-directed agents (which are dangerous if the goal is even slightly wrong): * Simply knowing that an agent is intelligent lets us infer that it is goal-directed. (ETA: See this comment for more details on this argument.) * Humans are particularly likely to build goal-directed agen",https://www.alignmentforum.org/posts/NxF5G6CJiof6cemTw/coherence-arguments-do-not-imply-goal-directed-behavior,2018,blogPost,"Shah, Rohin",AI Alignment Forum,TAI safety research | |
Two Alternatives to Logical Counterfactuals,"The following is a critique of the idea of logical counterfactuals. The idea of logical counterfactuals has appeared in previous agent foundations research (especially at MIRI): here, here. “…",https://unstableontology.com/2020/04/01/alternatives-to-logical-counterfactuals/,2020,blogPost,"Taylor, Jessica",Unstable Ontology,TAI safety research | |
The race for an artificial general intelligence: implications for public policy,"An arms race for an artificial general intelligence (AGI) would be detrimental for and even pose an existential threat to humanity if it results in an unfriendly AGI. In this paper, an all-pay contest model is developed to derive implications for public policy to avoid such an outcome. It is established that, in a winner-takes-all race, where players must invest in R&D, only the most competitive teams will participate. Thus, given the difficulty of AGI, the number of competing teams is unlikely ever to be very large. It is also established that the intention of teams competing in an AGI race, as well as the possibility of an intermediate outcome (prize), is important. The possibility of an intermediate prize will raise the probability of finding the dominant AGI application and, hence, will make public control more urgent. It is recommended that the danger of an unfriendly AGI can be reduced by taxing AI and using public procurement. This would reduce the pay-off of contestants, raise the amount of R&D needed to compete, and coordinate and incentivize co-operation. This will help to alleviate the control and political problems in AI. Future research is needed to elaborate the design of systems of public procurement of AI innovation and for appropriately adjusting the legal frameworks underpinning high-tech innovation, in particular dealing with patenting by AI.",https://doi.org/10.1007/s00146-019-00887-x,2019,journalArticle,"Naudé, Wim; Dimitri, Nicola",AI & Society,TAI safety research | |
Neuroevolution of Self-Interpretable Agents,"Inattentional blindness is the psychological phenomenon that causes one to miss things in plain sight. It is a consequence of the selective attention in perception that lets us remain focused on important parts of our world without distraction from irrelevant details. Motivated by selective attention, we study the properties of artificial agents that perceive the world through the lens of a self-attention bottleneck. By constraining access to only a small fraction of the visual input, we show that their policies are directly interpretable in pixel space. We find neuroevolution ideal for training self-attention architectures for vision-based reinforcement learning (RL) tasks, allowing us to incorporate modules that can include discrete, non-differentiable operations which are useful for our agent. We argue that self-attention has similar properties as indirect encoding, in the sense that large implicit weight matrices are generated from a small number of key-query parameters, thus enabling our agent to solve challenging vision based tasks with at least 1000x fewer parameters than existing methods. Since our agent attends to only task critical visual hints, they are able to generalize to environments where task irrelevant elements are modified while conventional methods fail. Videos of our results and source code available at https://attentionagent.github.io/",http://arxiv.org/abs/2003.08165,2020,conferencePaper,"Tang, Yujin; Nguyen, Duong; Ha, David",Proceedings of the 2020 Genetic and Evolutionary Computation Conference,not TAI safety research | |
Brainjacking in deep brain stimulation and autonomy,,,2018,journalArticle,"Pugh, Jonathan; Pycroft, Laurie; Sandberg, Anders; Aziz, Tipu; Savulescu, Julian",Ethics and information technology,not TAI safety research | |
AI development incentive gradients are not uniformly terrible,"Much of the work for this post was done together with Nuño Sempere Perhaps you think that your values will be best served if the AGI you (or your team, company or nation) are developing is deployed first. Would you decide that it's worth cutting a few corners, reducing your safety budget, and pushing ahead to try and get your AI out the door first? It seems plausible, and worrying, that you might. And if your competitors reason symmetrically, we would get a ""safety race to the bottom"". On the other hand, perhaps you think your values will be better served if your enemy wins than if either of you accidentally produces an unfriendly AI. Would you decide the safety costs to improving your chances aren't worth it? In a simple two player model, you should only shift funds from safety to capabilities if (the relative₁ decrease in chance of friendliness) / (the relative₁ increase in the chance of winning) < (expected relative₂ loss of value if your enemy wins rather than you). Here, the relative₁ increases and decreases are relative to the current values. The relative₂ loss of value is relative to the expected value if you win. The plan of this post is as follows: 1. Consider a very simple model that leads to a safety race. Identify unrealistic assumptions which are driving its results. 2. Remove some of the unrealistic assumptions and generate a different model. Derive the inequality expressed above. 3. Look at some specific example cases, and see how they affect safety considerations. A PARTLY DISCONTINUOUS MODEL Let's consider a model with two players with the same amount of resources. Each player's choice is what fraction of their resources to devote to safety, rather than capabilities. Whichever player contributes more to capabilities wins the race. If you win the race, you either get a good outcome or a bad outcome. Your chance of getting a good outcome increases continuously with the amount you spent on safety. If the other player wins, you get a bad outcome.",https://www.lesswrong.com/posts/bkG4qj9BFEkNva3EX/ai-development-incentive-gradients-are-not-uniformly,2018,blogPost,rk,LessWrong,TAI safety research | |
What is ambitious value learning?,"I think of ambitious value learning as a proposed solution to the specification problem, which I define as the problem of defining the behavior that we would want to see from our AI system. I italicize “defining” to emphasize that this is not the problem of actually computing behavior that we want to see -- that’s the full AI safety problem. Here we are allowed to use hopelessly impractical schemes, as long as the resulting definition would allow us to in theory compute the behavior that an AI system would take, perhaps with assumptions like infinite computing power or arbitrarily many queries to a human. (Although we do prefer specifications that seem like they could admit an efficient implementation.) In terms of DeepMind’s classification, we are looking for a design specification that exactly matches the ideal specification. HCH and indirect normativity are examples of attempts at such specifications. We will consider a model in which our AI system is maximizing the expected utility of some explicitly represented utility function that can depend on history. (It does not matter materially whether we consider utility functions or reward functions, as long as they can depend on history.) The utility function may be learned from data, or designed by hand, but it must be an explicit part of the AI that is then maximized. I will not justify this model for now, but simply assume it by fiat and see where it takes us. I’ll note briefly that this model is often justified by the VNM utility theorem and AIXI, and as the natural idealization of reinforcement learning, which aims to maximize the expected sum of rewards, although typically rewards in RL depend only on states. A lot of conceptual arguments, as well as experiences with specification gaming, suggest that we are unlikely to be able to simply think hard and write down a good specification, since even small errors in specifications can lead to bad results. However, machine learning is particularly good at narro",https://www.alignmentforum.org/posts/5eX8ko7GCxwR5N9mN/what-is-ambitious-value-learning,2018,blogPost,"Shah, Rohin",AI Alignment Forum,TAI safety research | |
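For the classification task declared at the top of the card (predicting the Label column from the text fields), a reasonable first baseline is a linear model over TF-IDF features of the concatenated title and abstract. The sketch below is illustrative only and not part of the dataset; it reuses the hypothetical file path from the loading example and assumes scikit-learn for the model.

```python
# Baseline sketch: TF-IDF over title + abstract, logistic regression classifier.
# The CSV path is the same hypothetical stand-in used in the loading example.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("tai_safety_bibliography.csv")
text = df["Title"].fillna("") + " " + df["Abstract Note"].fillna("")
labels = df["Label"]

X_train, X_test, y_train, y_test = train_test_split(
    text, labels, test_size=0.2, random_state=0, stratify=labels
)

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=2),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

Only two label values appear in the preview above ("TAI safety research" and "not TAI safety research"), while the card lists multi-class-classification as the sub-task, so the full Label column may contain additional categories; the value_counts() call in the loading sketch will show the actual label set.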